I placed a low bid on a government auction for 25 EliteDesk 800 G1s and unexpectedly won, ultimately paying less than $20 per computer.
In the long run I plan on selling 15 or so of them to friends and family for cheap. I'll probably run Proxmox on 4 of them (3 as a lab cluster and 1 as the always-on home server) and keep a few for spares and random desktops around the house where I could use one.
But while I have all 25 of them what crazy clustering software/configurations should I run? Any fun benchmarks I should know about that I could run for the lolz?
Edit to add:
Specs, based on the auction listing and looking up the computer models:
- 4th gen i5s (probably i5-4560s or similar)
- 8GB of DDR3 RAM
- 256GB SSDs
- Windows 10 Pro (no mention of licenses, so that remains to be seen)
- Looks like 4 PCIe slots (2 ×1 and 2 ×16 physically, presumably half-height)
Possible projects I plan on doing:
- Proxmox cluster
- Baremetal Kubernetes cluster
- Harvester HCI cluster (which has the benefit of also being a Rancher cluster)
- Automated Windows Image creation, deployment and testing
- Pentesting lab
- Multi-site enterprise network setup and maintenance
- Linpack benchmark then compare to previous TOP500 lists
Senior year of high school, I put Unreal Tournament on the school server. If it were me, I'd recreate that experience, including our teacher looking around the class. That was almost 20 years ago; I hope everyone is doing alright.
I have a box with 10 old laptops that I keep around just for that. Unreal Tournament 2004, Insane, Brood War and all the id classics. I don't get to set it up a lot, but when I do it's always a hit.
Shitty k8s cluster/space heater?
According to Bush Jr. and Cheney, you are now capable of building a supercomputer dangerous enough to warrant a 20+ year invasion.
Depending on the actual condition of all those computers and your own skill in building, I'd say you could rig a pretty decent home server rack out of those for most purposes you could imagine: a personal VPN, personal RDP to conduct work on, a personal test server for experimental code, and/or a sandbox for testing potentially unsafe downloads/links for viruses.
Shit, you could probably build your own OS that optimizes for all that computing power just for the funzies, or even use it to make money by contributing its computing power to a crowdsourced computing project, where you dedicate memory bandwidth for some grad student or research institute to do all their crazy math with. Easiest way to rack up academic citations if you ever want to be a researcher!
What are you referencing in regard to the super computer investigation? Internet search failed me
https://www.ign.com/articles/2000/12/20/iraq-scores-hordes-of-ps2s-at-us-gamers-expense
I’m pretty sure this is it.
Weird. Thanks for finding it!
Hmm, get 25 monitors and friends and play one of those starship bridge simulators like https://smcameron.github.io/space-nerds-in-space/
I volunteered as tribute to be one of these ‘Friends’
You made me remember PULSAR - Lost Colony which is a decent iteration of co-op space bridge sim!
Oh, that one was a blast! I need to get my nerd herd to revisit it… Although all we did was play liars dice while the ship was on fire.
I think the only answer is “Doom”
But can they run Crysis?
Were you thinking like a Doom LAN party, or some weird supercluster with the pure focus of running Doom?
If OP actually does do this I recommend Odamex
Although he’d also need 25 monitors lol
Although he’d also need 25 monitors lol
Back to the government auctions then!
25 machines at, say, 100 W each is about 2.5 kW. Can you even power them all at the same time at home without tripping circuit breakers? At your mentioned $0.12/kWh that is about 30 cents an hour, or over $200 to run them for a month, so that adds up too.
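Sketching that math out (assuming standard 120 V circuits and the $0.12/kWh rate quoted above):

```python
# Rough power and cost estimate for 25 machines at ~100 W each.
# Assumptions: 120 V household circuits, $0.12/kWh electricity.
machines = 25
watts_each = 100
rate_per_kwh = 0.12

total_kw = machines * watts_each / 1000       # 2.5 kW total draw
amps_at_120v = machines * watts_each / 120    # ~20.8 A if on one circuit
cost_per_hour = total_kw * rate_per_kwh       # ~$0.30 per hour
cost_per_month = cost_per_hour * 24 * 30      # ~$216 per month, 24/7

print(f"{total_kw} kW, {amps_at_120v:.1f} A @ 120 V, "
      f"${cost_per_hour:.2f}/h, ${cost_per_month:.0f}/mo")
```

So the full fleet exceeds a single 15 A or 20 A breaker and has to be split across circuits.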
The i5-4560S is 4597 on PassMark, which isn't that great. 25 of them is 115k at best, so about like a big Ryzen server that you can rent for the same $200 or so. I can think of various computation projects that could use that, but I don't think I'd bother with a room full of crufty old PCs if I was pursuing something like that.
UK here, we could run that from 1 plug.
Psh, 1 plug ain't shit. Every pic I see from anyone who lives out in those ghettos of India, Central America or any Pacific islands, they also only rock 1 plug, but they're running the corner store, the liquor store, the hospital, their style of little school, middle school and old school, 3 hair salons if Latin or 3 nail salons if Pacific, Bollywood, every stadium from every country in the World Cup, and always 1 dude trying to squeeze 1 more plug in cuz he's running low on batteries. Idk why the American ghetto is so pussy. One time I seen a family that fuckin put covers over empty sockets?!? Come on dog, that's like wearing a condom jerking off. NGL tho, I get super jelly seeing pictures from those countries with their thousands of power lines, phone lines, sidelines, cable lines, borderlines, internet lines… fuck, I don't know much about how my AOL works, but those wizards must be streaming some Hella fast Tokyo banddrifts with all them wires.
those wizards must be streaming some Hella fast Tokyo banddrifts with all them wires.
That part is wrong for India, at least.
Here’s a random site with some stats
In India, you can expect ~100Mb/s with FTTH and 50Mb/s otherwise. Reliability is even worse. Rest is right.
India has gigabit fiber
And Japan has a 300+ Tb/s connection. Your point?
My point is that the average Indian is not doing "Hella fast Tokyo banddrifts" (not sure what banddrift even means, but no). And yes, a 1Gb/s connection is theoretically available, but how many people are using the ~₹4000/month connection?
Considering how many people tend to just not have broadband at home, relying on mobile internet instead, we can see how things compare with others.
Also, to point to the thread starter: most of the "thousands of" cables that you see on poles in congested areas are just abandoned cables from older installations which nobody cared to remove.
I’m not the same dude that was talking about banddrifts and congested poles.
Indian, btw.
Also ~100Mb/s is in no way the average speed in an Indian household. It’s usually lower. I also don’t see any specific mentions of india in your link up there to that random site.
Also ~100Mb/s is in no way the average speed in an Indian household.
You’re right. It’s not.
I also don’t see any specific mentions of india in your link up there to that random site.
I don't see any either. Guess why? Because it only has the top 10, further emphasising the point that:
the average Indian is not doing “Hella fast Tokyo banddrifts”
I won't be leaving all of them on for long at all. I've got a few basically unused 15A electrical circuits in the unfinished basement (I can see the wires and visually trace the entire runs). I'll probably only run all 25 long enough to do a Linpack benchmark, and maybe run some kind of AI model on the distributed compute, then start getting rid of at least half of them.
This is only about 21 amps. Most outlets in a home are 15 amps, but 20 amps isn't unheard of. From one outlet, doubtful, but yes, one house could provide that much power easily if you split them up across three or four rooms on different breakers.
Now it would be fun to watch his electric meter spin like a saw blade … (yes I’m old … I remember meters that had spinning discs)
Just two 15A breakers is enough, actually. Circuits are only supposed to sustain 80% of their rated power continuously, so you should be able to pull 1.44kW from a single puny NEMA 5-15.
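The 80% continuous-load rule works out like this (assuming a 120 V, 15 A circuit and ~100 W per machine):

```python
# Continuous-load capacity of one 15 A, 120 V circuit under the 80% rule.
breaker_amps = 15
volts = 120
derate = 0.8  # continuous loads limited to 80% of breaker rating

continuous_watts = breaker_amps * derate * volts   # 1440 W sustained
machines_per_circuit = int(continuous_watts // 100)  # at ~100 W each

print(f"{continuous_watts:.0f} W continuous, ~{machines_per_circuit} machines per circuit")
```

So roughly 14 of the ~100 W machines fit per 15 A circuit, and two circuits cover all 25.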
Well, true, but I was assuming the circuits had some things drawing a little power. Flipping on a device and tripping a breaker with 12 machines on it wouldn't be ideal :)
I have done this before in my upstairs home lab. 3 beefy ESXi machines, some NAS storage, and a basic 10GbE switch eat up a lot of a single 15 amp circuit. And apparently turning on a TV pushes it over the edge. Luckily the UPS saved my butt while I reset the breaker and shut some stuff off.
I have a couple of these (only the G2 and G3 SFF) and they consume between 6-10W when not under load, maxing out at 35W (or 65W depending on the CPU). I run Proxmox with 64GB of RAM and they are surprisingly efficient.
Jack into the local coffee shop
That’s less than a kettle, in the UK at least.
Of course I wouldn’t want to be running that all the time, because electric ain’t cheap.
Put a different operating system on each one, and make each a gateway to access the next. See who can make it through.
HungerGamesOS. I love this idea!!!
Run 70b Llama 3 on one and have a 100% local, GPT-4 level home assistant. Hook it up with Coqui AI's XTTSv2 for mind-baffling natural language speech (100% local too) that can imitate anyone's voice. Now you've got yourself Jarvis from Iron Man.
Edit: thought they were some kind of beast machines with 192GB of RAM and stuff. They're just regular mid-to-low tier PCs.
I tried doing that on my home server, but running it on the CPU is super slow, and the model won’t fit on the GPU. Not sure what I’m doing wrong
Sadly, I can't really help you much. I have a potato PC and the biggest model I ran on it was Microsoft's Phi-2 using the Candle framework. I used to tinker with llama.cpp on Colab, but it seems they don't handle Llama 3 yet. Ollama says it does, but I've never tried it before. As for the speed, it's kinda expected for a 70b model to be really slow on the CPU. How slow is too slow? I don't really know…
You can always try the 8b model. People say it's really great and has even replaced the 70b models they'd been using.
Slow as in I waited a few minutes and finally killed it when it didn't seem like it was going anywhere. And this was with the 7b model…
That shouldn't happen with an 8b model. Even on CPU it's supposed to be decently fast. There's definitely something wrong here.
Hm… Alright, I’ll have to take another look at it. I kinda gave up, figuring my old server just didn’t have the specs for it
Specs? Try Mistral with llama.cpp.
It has an Intel Xeon E3-1225 V2, 20GB of RAM, and a Strix GTX 970 with 4GB of VRAM. I've actually tried Mistral 7b and Decapoda's Llama 7b, running them in Python with Hugging Face's Transformers library (from local models).
These are 10-year-old mid-range machines. Llama 7b won't even run well.
The key is quantized models. A full model wouldn't fit, but a 4-bit 8b Llama 3 would.
It would fit but it would be very slow
No. Quantization makes it go faster. Not blazing fast, but decent.
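The memory side of the argument is easy to sanity-check. A rough sketch, assuming weight memory is simply parameter count times bytes per weight (real runtimes add KV cache and runtime overhead on top):

```python
# Approximate memory needed just to hold model weights.
# Assumption: bytes = params * bits / 8, ignoring KV cache and overhead.
def weight_gb(params_billion: float, bits: int) -> float:
    """GB of weight storage for a model of the given size and precision."""
    return params_billion * 1e9 * bits / 8 / 1e9

for params in (8, 70):
    for bits in (16, 4):
        print(f"{params}b @ {bits}-bit ≈ {weight_gb(params, bits):.0f} GB")
```

An 8b model at 4-bit needs about 4 GB, so it squeezes into one of these 8GB boxes, while a 70b model needs ~35 GB even quantized and simply won't fit.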
I certainly wouldn't want to pay the power bill from leaving a bunch of these running 24/7, but it would work fine if you wanted to learn cluster computing.
You could always load them up with a bunch of classic games and get all your friends over for a LAN party.
BOINC! Do some science. :)
Do you have particularly cheap or free electricity?
12 cents per kilowatt-hour. I certainly don’t plan on leaving more than a couple on long term. I might get lucky with the weather and need the heating though :)
Distcc, maybe Gluster. Run a Docker Swarm setup on PVE or something.
Models like those are a little hard to exploit well because of the limited network bandwidth between them. Other mini-PC models that have a PCIe slot are fun because you can jam high-speed networking into them along with NVMe, then do rapid failover between machines with very little impact when one goes offline.
If you do want to bump your bandwidth per machine, you might be able to repurpose the WLAN M.2 slot for a 2.5GbE port, but you'll likely have to hang the module out the back through a serial port or something. Aquantia USB modules work well too; those can provide 5GbE fairly stably.
Edit: Oh, you’re talking about the larger desktop elitedesk g1, not the USFF tiny machines. Yeah, you can jam whatever hh cards into these you want - go wild.
From the listing photos, these actually have half-height expansion slots! So GPU options are practically nonexistent, but networking and storage are blown wide open compared to the mini PCs that are more prevalent now.
Yeah, you'll be fairly limited as far as GPU solutions go. I have a handful of hh AMD cards kicking around that were originally shipped in t740s and similar, but they're really only good for hardware transcoding or hanging extra monitors off the machine. It's difficult to find a hh board with a useful amount of VRAM for ML/AI tasks.
If I had 25 surprise desktops I imagine I’d discover a long dormant need for a Beowulf cluster.
The thought did cross my mind to run Linpack and see where I'd fall on the Top500 (or the Top500 of 2000, for example, for a fairer comparison haha).
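For scale, here's a rough theoretical peak (Rpeak) estimate, assuming Haswell-era i5s around 3 GHz with AVX2 FMA; the actual HPL Rmax over gigabit Ethernet would land well below this:

```python
# Theoretical peak for the cluster, assuming per-node:
# 4 cores * ~3.0 GHz * 16 double-precision FLOPs/cycle (AVX2 FMA).
cores = 4
ghz = 3.0
flops_per_cycle = 16  # 2 x 256-bit FMA units, 4 doubles each

rpeak_per_node_gflops = cores * ghz * flops_per_cycle     # 192 GFLOPS/node
cluster_rpeak_tflops = 25 * rpeak_per_node_gflops / 1000  # ~4.8 TFLOPS

print(f"{rpeak_per_node_gflops:.0f} GFLOPS/node, "
      f"{cluster_rpeak_tflops:.1f} TFLOPS for 25 nodes")
```

A few TFLOPS of theoretical peak would have been supercomputer territory around 2000, which makes the "Top500 of 2000" comparison a fun target.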
- Slurm cluster
- MPI development
NOT any kind of crypto mining bullshit.
There’s always a good reason not to put another crypto mining cluster into the world.