Hello c/homelab!

My NAS currently consists of a single 6TB disk of spinning rust. As time goes on, I think more and more about how annoying it would be to lose it all to a random drive failure.

So, I recently had an idea for a new storage setup when I saw a 2TB M.2 drive for £60-70 online. Given the low price, these drives are likely low-quality and probably cacheless (i.e. DRAM-less) too, but I have a potential workaround: if I bought 4 of these and set them up in RAID 10, would that be a sensible way to roughly double the speed and add redundancy?
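
For the curious, this is roughly what I have in mind on the software side. It’s just a sketch assuming a Linux box with mdadm; the device names and paths below are placeholders rather than recommendations:

```
# Sketch: build a 4-drive RAID 10 array and put ext4 on it.
# Device names are placeholders; check yours with `lsblk` first.
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/storage

# Save the array definition so it assembles on boot
# (config path is the Debian/Ubuntu one; it varies by distro)
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
```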

Yes, I know it’s probably a silly idea when I could just spend more on 2 faster, more reliable drives, but I’d at least like to hear from people who might have tried something similar! So, what do you think?

  • Retro@lemmy.world · 1 year ago

    Never tried a build with M.2 drives, but unless you have a workload that actually needs that kind of speed, SATA SSDs might be better bang for the buck (especially if those M.2 drives aren’t NVMe). If you’re just storing media or something, hard drives are probably still the way to go.

    • Pyro@lemmy.world (OP) · 1 year ago

      I picked M.2 for the compact size, since I’m also looking to reduce the physical size of the box as well as the noise and energy footprint (so spinning disks are out). SATA SSDs could still work, as the RAID idea applies to those too; I’d just have to find a better deal on them than on the M.2 drives.

      • Retro@lemmy.world · 1 year ago

        Gotcha. If the M.2 drives work out cheaper (once you factor in any additional HBAs, adapters, or chassis you’d need), then I’d still go M.2. I just figured SATA might be a little more straightforward and is typically cheaper. Seems reasonable enough.

        • Pyro@lemmy.world (OP) · 1 year ago

          My plan is to, if possible, get a cheap-ish mini-PC with a low-power (but still reasonably capable) CPU like a 10700T or similar, and a PCIe slot. I’d then buy a PCIe riser cable and a card with extra M.2/SATA ports (depending on which drives I decide on). Next I’d design and 3D-print an enclosure for whichever extra drives I buy, plus the card, to be mounted onto the mini-PC. “Quality jank” is how I’d describe it, hahaha.

              • rambos@lemmy.world · 1 year ago

                Yeah, I had a similar idea about a silent, compact, power-efficient and cheap server due to limited space in a small apartment. I had an old full-size ATX board and an ATX PSU, so the design was built around those. The available cases were expensive and/or bigger than the one I made. It’s perfect for me, but it might be a bad idea for servers that produce a lot of heat. If you’re interested, check it out: LINK

                • Pyro@lemmy.world (OP) · 1 year ago

                  “silent, compact, power-efficient and cheap server”

                  Now you’re speaking my language!

                  I was actually looking at a business-grade SBC from Asus (I think it was part of their 4x4 lineup) that had a mobile Ryzen chip and some other decent specs, but it was unfortunately a bit too expensive for me. I’ve also done a bit of research into ARM-based solutions, but it’s still pretty early days for cheap ARM homelab gear.

    • Pyro@lemmy.world (OP) · 1 year ago

      I did see that video! And yes I fully intend to test their true capacity with something like f3 or h2testw. Thanks for the warning!
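
      In case anyone else wants to do the same, this is roughly how I’d run f3 on Linux (the mount point is just a placeholder for wherever the drive ends up mounted):

      ```
      # Sketch: check a drive's real capacity with f3
      sudo apt install f3       # package name on Debian/Ubuntu; build from source elsewhere
      f3write /mnt/newdrive     # fills the free space with test files
      f3read /mnt/newdrive      # reads them back and reports how much data survived intact
      ```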

  • vsis@feddit.cl · 1 year ago

    RAID is for avoiding service disruption when a disk fails, not for avoiding data loss.

    To avoid data loss you need backups, ideally following the 3-2-1 rule: 3 copies of your data, on at least 2 different types of storage, with 1 of them off-site.
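
    A minimal sketch of the off-site leg, assuming you push to some remote machine over SSH (the tool choice, host and paths are all placeholders, not a prescription):

    ```
    # One off-site copy via rsync over SSH (host and paths are placeholders)
    rsync -aH --delete /mnt/storage/ backupuser@offsite.example.com:/backups/nas/
    ```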

    • Pyro@lemmy.world (OP) · 1 year ago

      I’m aware of the 3-2-1 rule for backups, but surely a mirrored disk still technically counts as one backup?

      • vsis@feddit.cl · 1 year ago

        Well, kind of. But all the disks sit under the same conditions: a power failure could take all of them out at once, a physical impact affects them all the same way, they can all be stolen together, etc. It’s better than a single disk, for sure, but it’s not a real backup.

        Now, if you want to have fun setting up a RAID, go ahead and post the results here. Just be aware that RAID is not there to prevent data loss.

    • Pyro@lemmy.world (OP) · 1 year ago

      I specifically wanted solid state to reduce noise and power consumption, because I also work in the same room as the NAS and power is pretty expensive in the UK. I’m happy to accept a higher initial cost to gain those two benefits.

  • bender@infosec.pub · 1 year ago

    You probably don’t need that kind of read/write performance in your average NAS, because you’re almost certainly going to be network-limited. I’m not sure what the specs on these cheap drives are, but something like a Samsung 970 EVO from a few years ago would more than saturate a 10Gb link (10 Gbit/s is only about 1.25 GB/s), so doubling that wouldn’t really help.

    That said, I recently built a 4-drive M.2 RAID 0 on my homelab server for some read-heavy workloads, and it scaled close to how you’d expect with just mdadm + ext4 (about 80% of the drives’ combined theoretical bandwidth in an fio test). If you can actually use the extra IOPS or disk bandwidth, it works pretty well and was easy to do.
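
    If it helps, the setup was nothing exotic; it went roughly like this (device names, mount point and test sizes are illustrative rather than my exact config):

    ```
    # Rough sketch: 4-drive RAID 0 with mdadm, ext4 on top
    sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
        /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
    sudo mkfs.ext4 /dev/md0
    sudo mount /dev/md0 /mnt/raid0

    # Sequential read test in the spirit of the one I mentioned
    sudo fio --name=seqread --filename=/mnt/raid0/fio.test \
        --rw=read --bs=1M --size=16G --numjobs=4 --iodepth=32 \
        --ioengine=libaio --direct=1 --group_reporting
    ```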

    • Pyro@lemmy.world (OP) · 1 year ago

      I think you’re half-right about the speed. Cacheless SSDs tend to perform VERY poorly in random 4K reads/writes (like, HDD or even SD-card poorly), so I figured striping could mitigate slowdowns from operations like that. I do see the value in just getting 2 better SSDs, though; I’d just have to spend even more on higher-capacity drives to make up for the lower drive count.
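
      If I do pick one of these drives up, I’ll probably sanity-check the random 4K behaviour on a single drive before buying three more. Something like this fio run is what I have in mind (the parameters are just a rough guess, not a tuned benchmark):

      ```
      # Sketch: mixed 4K random read/write test on one drive
      # /mnt/testdrive is a placeholder mount point for the drive under test
      sudo fio --name=rand4k --filename=/mnt/testdrive/fio.test \
          --rw=randrw --rwmixread=70 --bs=4k --size=4G \
          --iodepth=4 --numjobs=1 --runtime=60 --time_based \
          --ioengine=libaio --direct=1 --group_reporting
      ```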

      • bender@infosec.pub · 1 year ago

        Well, I certainly made some assumptions about your workload that may not be true. What do you use it for?

        As bulk storage (backups, media streaming, etc.), random 4K reads are usually not going to be the limiting factor, except maybe for the occasional indexing pass by Plex and the like. If you’ve instead got lots of small files you’re accessing, or are hosting something like a busy database or web server on it, then you could see a significant boost, though nowhere near as significant as just co-locating the service and the data. If your workload involves a lot of writes, I would stay away: the MTTF on “cacheless” SSDs is pretty garbage, which seems like the biggest issue to me.
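
        If drive longevity is the worry, it’s at least easy to keep an eye on: NVMe drives expose wear counters via SMART. For example, with smartmontools installed (the device name is a placeholder):

        ```
        # Show SMART/health data for an NVMe drive
        sudo smartctl -a /dev/nvme0
        # Watch "Percentage Used" and "Data Units Written" over time
        ```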

        Also, I didn’t mean to suggest buying nicer drives; I was just using an older drive I’m familiar with as a reference point. I recently bought 2TB 970 EVO Plus drives on sale for $80 each, which is in your price range, but I’m not sure if that pricing made it to the UK.

        • Pyro@lemmy.world (OP) · 1 year ago

          My workload is a little mixed. It’s mostly media storage, which of course is sequential, but I also occasionally use it to host game servers for my friends and me to play on. Depending on how each game is written, it may well be reading from random locations to access assets or whatever. After spending hundreds on an upgrade, I’d rather the performance at least doesn’t get worse.

          I’ll do some digging into the cost of fewer but more reputable SSDs later. Thanks for your input so far!