I’ve got a homelab setup that could benefit from low-power AI acceleration, which would let me run Whisper and distilled models locally and integrate with my services, e.g. Home Assistant. Plus, the less data I send over my network the happier I’ll be.
I don’t really want to stuff a GPU into my system right now; I don’t have much power budget, and a GPU that’s actually useful gets pricey. I’ve seen a few examples of “edge accelerators” which boast a super tiny (2–5 W) power envelope and 40 TOPS, but that doesn’t tell me much about how well models will actually run in practice.
Is there any kind of mapping between TOPS and, say, tokens per second for a given model? Maybe a recommended TOPS figure per model?
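For what it’s worth, the usual back-of-envelope answer is that single-stream LLM decode is memory-bandwidth bound rather than compute bound, so TOPS mostly drops out of the estimate. A rough sketch (all numbers here are illustrative assumptions, not vendor specs):

```python
# Rule of thumb: each decoded token reads the whole model from memory,
# so tokens/sec ≈ effective memory bandwidth / bytes per token.
# TOPS barely matters for batch-1 decode; bandwidth does.

def est_tokens_per_sec(model_params_b: float, bytes_per_param: float,
                       mem_bw_gbs: float, efficiency: float = 0.5) -> float:
    """Estimate decode speed from memory bandwidth.

    model_params_b  -- model size in billions of parameters
    bytes_per_param -- 2.0 for fp16, roughly 0.5 for 4-bit quant
    mem_bw_gbs      -- peak memory bandwidth in GB/s
    efficiency      -- fraction of peak actually achieved (assumed, not measured)
    """
    bytes_per_token = model_params_b * 1e9 * bytes_per_param
    return (mem_bw_gbs * 1e9 * efficiency) / bytes_per_token

# Hypothetical example: 7B model, 4-bit quant, edge box with 30 GB/s DRAM
print(round(est_tokens_per_sec(7, 0.5, 30), 1))  # ~4.3 tok/s
```

That’s why a 40 TOPS accelerator hanging off slow LPDDR can still decode slower than an old GPU with fat GDDR: the bandwidth term dominates. Prompt processing (prefill) is the part where TOPS actually helps.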


Yeah I could do that tbh. DDR4 isn’t so bad price-wise…
I’ll see what the lower budget cards of the last few years look like. I’m a lazy sod.
Idle power consumption isn’t a massive issue for me, but I’m more finicky about it with my NAS, as I’d prefer to have the cooling and power reserved for my drives (and expansion thereof).