So with crypto mining being less profitable and miners selling their rigs for cheap, I thought: how can I get myself a cheap managed network switch? I saw mining motherboards with 12 PCIe 2.0 x1 slots (4 Gbit bandwidth each) and thought, hey, if I plug in some cheap 2-port 10G network adapters I can make my own network switch with exactly the ports I need (SFP+ and RJ45). Put OPNsense on it and boom, managed network switch with multigig (2 Gbit per port). Is there something I am missing, or have I found a way to get cheap multigig? Also, can anyone who has a 10G-only network switch tell me what kind of power it is using per port so I can compare? Thanks for debunking my idea and saving me a few bucks.
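For anyone sanity-checking the bandwidth math: a rough sketch, assuming PCIe 2.0's 5 GT/s per lane with 8b/10b encoding (the numbers below are the theoretical link rate, not measured throughput):

```python
# Rough PCIe 2.0 x1 bandwidth math behind the "2 Gbit per port" claim.
# Assumption: 5 GT/s per lane, 8b/10b line coding -> ~4 Gbit/s usable.
GT_PER_LANE = 5.0            # gigatransfers/s for PCIe 2.0
ENCODING_EFFICIENCY = 8 / 10 # 8b/10b coding overhead

lane_gbit = GT_PER_LANE * ENCODING_EFFICIENCY  # ~4 Gbit/s per x1 slot
ports_per_card = 2
per_port_gbit = lane_gbit / ports_per_card     # both ports share one lane

print(f"x1 slot bandwidth: {lane_gbit:.1f} Gbit/s")
print(f"per port on a 2-port card: {per_port_gbit:.1f} Gbit/s")
```

Note this is the ceiling if both ports are saturated at once; a single active port could still burst up to the full ~4 Gbit/s of the slot.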
This approach will take an insane amount of power compared to a real switch
I measured on my server: each SFP+ card takes 10W, plus my ASRock BTC motherboard (which I bought for the many PCIe slots, ignoring the low lane speeds, which made it almost useless for what I wanted to do, a NAS) consumes 40W at idle. So an 8-port switch would take 120W. Not a huge amount, but much higher than a real switch.
So, sort of along the same vein. I recently wanted to build a cheap 10G router. I found this SuperMicro X10SLH-N6-ST031 on eBay with 6x 10G RJ45 ports (they don’t do auto-negotiation, and only run at either 1G or 10G) for roughly $60. This specific seller gave me a compatible CPU for free, already in the mobo. Of note, this motherboard is slightly longer than a micro-ATX motherboard, but still smaller than a full-sized ATX board. Knowing this, I went for the safest option of finding a case that was compatible with both micro and full ATX. I ended up going with a SilverStone GD09B HTPC chassis.
After finding that, I wanted to make sure I could update the BIOS to include patches for Spectre and Meltdown. Lucky for me, users at servethehome already did this! The modded BIOS also enabled NVMe support. There’s also a lot of great info about this board and related projects in that thread.
It works perfectly for my use case. I think with the NICs, it runs at about 100W idle. More than a normal switch, but cheaper for the hardware at least compared to other 10G switches. Might be worth considering!
100W idle? Wow, that’s worse than I thought.
The price seems great though, but no fallback to 2.5 Gbit is also a bit problematic in a homelab setup.
Yeah. My solution is to do the negotiation on my MikroTik switch. Basically I input 10G into the switch from the router, and then use the switch to step down the speed of the connection.
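For reference, forcing a fixed speed on a MikroTik port looks roughly like this (a sketch only: `ether5` is a hypothetical interface name, and the exact property values vary between RouterOS versions and port types, so check your own device):

```
# Disable auto-negotiation and pin the downstream port to 1G
# so gear behind it doesn't have to negotiate with the 10G side.
/interface ethernet
set ether5 auto-negotiation=no speed=1Gbps
```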
Where I live 100W would cost me something like $25/mo to run, continuously, for the life of the device. I think my 8x10Gb + 24x1g switch draws around 15-20W if I’m not using PoE and I spent around $260 on it. Inside a year the commodity switch becomes cheaper to own even if I were given a whitebox equivalent for free.
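As a sanity check on that break-even claim, a sketch using the rough numbers from this thread (a ~100W whitebox vs an ~18W switch, at the roughly $0.35/kWh implied by $25/mo for 100W; none of these are measurements of your gear):

```python
# Break-even sketch: commodity switch vs a free-but-hungry whitebox.
PRICE_PER_KWH = 0.35      # implied by ~$25/mo for a continuous 100 W load
HOURS_PER_MONTH = 24 * 30

def monthly_cost(watts):
    """Electricity cost per month for a continuous load."""
    return watts / 1000 * HOURS_PER_MONTH * PRICE_PER_KWH

whitebox_monthly = monthly_cost(100)  # ~100 W idle whitebox
switch_monthly = monthly_cost(18)     # ~15-20 W commodity switch
switch_price = 260

# Months until the $260 switch is cheaper than a *free* whitebox:
months = switch_price / (whitebox_monthly - switch_monthly)
print(f"whitebox: ${whitebox_monthly:.2f}/mo, switch: ${switch_monthly:.2f}/mo")
print(f"break-even vs a free whitebox: {months:.1f} months")
```

With these figures the crossover lands around the one-year mark, which is the point being made: the power bill, not the hardware, dominates the cost.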
If I lived somewhere with lower kWh costs I’d be happy to roll my own whitebox but it’s just not viable here.
I do admit I am graced by low power costs due to hydro, and if that wasn’t the case I probably wouldn’t have done it.
How cheap are you seeing this hardware? In the UK at least I could get an 8 port managed switch for £25 (on sale, full price £35)
https://www.amazon.co.uk/TP-Link-Snooping-Monitoring-Interface-TL-SG608E/dp/B0BVRK6L2V
A 2-port PCIe gigabit network card runs £30 each; to match the port count it’d cost £120
https://www.amazon.co.uk/Binardat-Gigabit-Network-Controller-Ethernet/dp/B0C4H4WNL9
That is not multigig though. Yes, for 1 Gbit this isn’t worth it, but for 2.5 or 10 it is a bit cheaper: around €20 per two 10G ports.
Check out decommissioned Brocade or Ruckus switches on eBay if they’re available in your region. There’s a thread over on ServeTheHome with a licensed feature unlocking method that’ll get you fully enabled hardware for cheap cheap cheap prices.
On my ICX6450-24P I see like, <20W power use while shoveling packets through four 10G ports. PoE drives usage up, obviously, if I’m supplying power to things with it.
If you can find them the ICX7250 is a baller homelab switch, I paid <$250 for mine and I’m putting the 8x10Gb ports to good use. Low power draw here as well.
If you want bigger, the ICX6610 is a power hog but offers a couple of 40Gb ports as well as 8x10Gb. These draw significantly more power, they’re PowerPC rather than ARM, and are loud compared to home gear, but they’ll let you link a bunch of machines at 10Gb as well as two at 40Gb - which is awesome for a NAS and/or firewall. I want to say mine drew like 50W at idle, so they’re not cheap to run over time.
I’ve also considered rolling my own whitebox router so I can connect 2.5 and 5Gb gear to the rest of my network but it’s not that much cheaper than just buying a cheap chinesium $200 dumb switch with 4-8x 2.5Gb ports and a couple of 10Gb uplinks.
Surprised I’ve never seen this DIY approach mentioned anywhere or thought of it before 🤔 - usually people end up going for those mini PCs that have multiple network cards soldered to the mobo itself
Compared to an actual 10gig switch, the power consumption might be high (unless the network card drivers are well optimised, offloading as much traffic handling as possible to the 10gig cards’ own processors). In that case just make sure you have a powerful enough CPU to handle that traffic, as well as the 4gbit of traffic traveling between the network cards over PCIe.
Some gotchas to look out for though:
- PCIe lane wiring… are the slots going straight to the CPU, or via the chipset (or, slightly slower again, a PCIe switch connected to the chipset)? The mobo manual can advise on this; ideally you’d want as many PCIe lanes connected directly to the CPU as possible to get the full speed.
- Power consumption… touched on this earlier but one to be aware of, esp if you live somewhere where electricity is expen$iv€
- Noise… you might need to buy a fan to cool down the network cards depending on your traffic, and how much the OS driver offloads to your 10gig cards
- A backup… if you need to make changes to your DIY switch, make sure you have some other way of accessing the internet
- Bridging… there may be an ideal/recommended way to set up bridging for multiple interfaces on the same network card, to take advantage of hardware offloading, allowing you to get 10gig traffic between two devices even though the PCIe lane is just 4gbit
- Traffic filtering… again just ensure your CPU can handle it, particularly for HTTP traffic. I only do filtering on DNS traffic due to having a weak CPU, works well enough to catch some ad/tracking services that employ nasty tricks to evade blocklists.
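On the bridging point above: on Linux this might look like the sketch below (assuming iproute2, with `enp1s0f0`/`enp1s0f1` as hypothetical names for the two ports of one dual-port card):

```shell
# Sketch: bridge both ports of one dual-port card so devices on either
# port can talk to each other. Interface names are hypothetical.
ip link add name br0 type bridge
ip link set enp1s0f0 master br0
ip link set enp1s0f1 master br0
ip link set enp1s0f0 up
ip link set enp1s0f1 up
ip link set br0 up
```

One caveat: a plain Linux bridge forwards frames in the kernel, so traffic between two ports on the same card still crosses the x1 PCIe link in both directions; getting true card-local 10gig switching would need a NIC that supports hardware switching offload (e.g. switchdev-capable hardware), which is exactly the "ideal/recommended way" to look into.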
I only have 1gig hardware so can’t really provide comparisons :( however there’s a youtube channel called ServeTheHome that started measuring power consumption of almost all the hardware they test - if they’ve done 10gig switches recently then that should give you some pointers at least
Edit: fix formatting 🫠
Rolling a whitebox router is so much :effort: when decommissioned enterprise gear is dumped on fleaBay so cheaply. Plus it’s almost impossible to rival the power efficiency of a commercial switch without blowing more money than you’d pay for one.
I’ve kicked the idea around as a way to hook up multigig devices to my network (managed 2.5Gb + 10Gb switches are still expensive) but by the time you’ve built the machine you’re looking at the same cost and you have to maintain it, plus your network is down for however long it takes to reboot the thing after kernel package updates.
Okay, I see everyone telling me power will be 100W+, so I think I won’t do it, as we pay around 30-50 cents per kWh. But I see now why purpose-made hardware is more expensive.