I’m curious about what the consensus is here for which models are used for general purpose stuff (coding assist, general experimentation, etc)
What do you consider the “best” model under ~30B parameters?
The Qwen3-30B-A3B-2507 family is an absolute beast. The reasoning models are seriously chatty in their chain of thought, but the results speak for themselves. I’m running a Q4 quant on a 5090, and with a Q8 KV-cache quant I can fit a 60k-token context entirely in VRAM, which gets me up to 200 tokens per second.
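If anyone wants to reproduce that setup, it looks roughly like this with llama-cpp-python (the filename is a guess, so check what you actually downloaded, and note the V-cache quant needs flash attention in llama.cpp):

```python
import llama_cpp
from llama_cpp import Llama

Q8_0 = llama_cpp.GGML_TYPE_Q8_0  # ggml type enum value for the q8_0 KV quant

llm = Llama(
    model_path="Qwen3-30B-A3B-Instruct-2507-Q4_K_M.gguf",  # assumed filename
    n_ctx=60_000,      # the 60k context mentioned above
    n_gpu_layers=-1,   # offload every layer to the GPU
    flash_attn=True,   # required by llama.cpp for a quantized V cache
    type_k=Q8_0,       # K cache at q8_0
    type_v=Q8_0,       # V cache at q8_0
)
print(llm("Explain MoE routing in one sentence.", max_tokens=64)["choices"][0]["text"])
```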
Not sure I want to name any names… 😂
In my opinion, Qwen3-30B-A3B-2507 is the best here. The Thinking version is likely best for most things, as long as you don’t mind a slight speed penalty in exchange for more accuracy. I use the quantized IQ4_XS models from Bartowski or Unsloth on HuggingFace.
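If you haven’t pulled one of those quants before, a sketch with huggingface_hub (the repo id and filename here are assumptions, so check the actual file listing on the model page):

```python
from huggingface_hub import hf_hub_download

# Assumed repo id and filename; verify against Bartowski's actual listing.
path = hf_hub_download(
    repo_id="bartowski/Qwen3-30B-A3B-Instruct-2507-GGUF",
    filename="Qwen3-30B-A3B-Instruct-2507-IQ4_XS.gguf",
)
print(path)  # local cache path you can point llama.cpp at
```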
I’ve seen the new gpt-oss-20b model from OpenAI ranked well in benchmarks, but I haven’t liked the output at all. It typically seems lazy and not very comprehensive, and it makes obvious errors.
If you want even smaller and faster, the DeepSeek R1 0528 distill onto Qwen3 8B is great for its size (especially if you’re trying to free up some VRAM for larger context lengths).
That’s what I’m using, and it’s pretty nice. Thanks for your input!
I’m a big fan of NousResearch; their DeepHermes release was awesome, and now I’m trying out Hermes 4. I have an 8 GB 1070 Ti and was able to fully offload a medium quant of Hermes 4 14B with a decent amount of context.
I’m a big fan of the hybrid reasoning models; I like being able to turn thinking on or off depending on the scenario.
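For the Qwen3-style hybrids the toggle is just a soft switch in the prompt; here’s a sketch against a local OpenAI-compatible endpoint (the URL and model name are whatever your server uses; DeepHermes does it with a special system prompt instead, so check the model card):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-local")  # assumed local server

resp = client.chat.completions.create(
    model="qwen3-30b-a3b",  # whatever name your server registered
    messages=[
        # Appending /no_think disables the chain of thought; /think turns it back on.
        {"role": "user", "content": "Give me a one-line regex for ISO dates. /no_think"},
    ],
)
print(resp.choices[0].message.content)
```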
I had a vision-model document scanner + TTS pipeline going with a finetune of Qwen 2.5 VL and OuteTTS.
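A rough sketch of the scanner half of that pipeline with transformers (the checkpoint name is assumed, swap in your own finetune, and the TTS hand-off is just a comment since the OuteTTS API changes between versions):

```python
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "Qwen/Qwen2.5-VL-7B-Instruct"  # assumed checkpoint; use your finetune here
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("scan.png")
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Transcribe all text in this document."},
]}]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
ids = model.generate(**inputs, max_new_tokens=512)
text = processor.batch_decode(ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0]
print(text)  # from here you'd hand the transcript to OuteTTS for speech
```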
If you care more about character emulation for writing and creativity, then Mistral 2407 and Mistral NeMo are other models to check out.
Unlike most of you here reading this, I don’t allow a corporate executive/billionaire or a distant nation-state to tell me what I am permitted to say or what my model is allowed to output, so I use an uncensored general model from here (first uncheck the “proprietary model” box).
How do you remove all the propaganda they are already trained on? You reject DeepSeek, but you are just allowing yourself to be manipulated by a throng of old propaganda/censorship from the normal internet: garbage, manipulative information that is stored in the weights of your ‘uncensored’ model. ‘Freeing’ a model to say “shit” is not the same as an uncensored model that you can trust. I think we need a dataset cleansed of the current popular ideology and of all the propaganda against ‘evil nation-states’ that have simply rejected Western/US dominance (giving the middle finger to Western oligarchs)…
That is awesome, thank you for that link!
This leaderboard is a gem! This should be a separate post, thank you!
The votes here seem to disagree with you
Oh no, I must be wrong then :(
I am running the Irix model.
You can download and run locally any of the non-proprietary models listed in that leaderboard, so I don’t understand what you are trying to say by addressing the leaderboard scripts. Since you have no proof of this happening, and I can’t find anything about what you are talking about, you are speaking literal fucking nonsense.
So please elaborate.
That paragraph about January 6th and 4chanGPT is making me think you are mentally unstable. Please translate that shit into English for me, because either I’m very dumb or we both are.
Qwen 2.5 VL and Coder. I have the VL model doing image captions for LoRA training right now. The 14B is okay for basic code. A quantized Q6_K_L GGUF of the 32B version of the same Qwen 2.5 Coder model runs in 16 GB, but at a third of the speed of the 14B in bitsandbytes 4-bit. The latter is reasonably fast for a couple of layers of agentic stuff in Emacs with gptel, and it hits thinking or function calling out of a llama.cpp server better than 50% of the time.
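For reference, the bitsandbytes 4-bit load described above looks roughly like this with transformers (the exact repo id is an assumption):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Qwen/Qwen2.5-Coder-14B-Instruct"  # assumed repo id
bnb = BitsAndBytesConfig(
    load_in_4bit=True,                     # 4-bit NF4 weights
    bnb_4bit_compute_dtype=torch.bfloat16, # compute in bf16 for speed
)
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb, device_map="auto"
)

inputs = tok("def quicksort(", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```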
I still haven’t tried the new 20B out of OpenAI.
I really liked Mistral-Nemo-Instruct for its all-round capabilities, but it’s old and hardly the best any more. I feel lots of newer models are sycophants, tuned more for question answering and assistant work, and their ability to write long-form prose or role-play as a podcast host hasn’t really improved substantially. These days I switch models: something more creative if I want that, or a model dedicated to coding if I want autocomplete. But to be honest, coding isn’t on my list of requirements any more. I’ve tried AI coding and it’s not really helping with what I do; I regularly waste an extra 30%-100% of my time doing it with AI, and that’s with the big commercial services like AI Studio or ChatGPT.
Yeah a true Nemo successor is sorely overdue
I use Qwen Coder 30B and am testing Venice 24B; I’m also going to play with Qwen embedding 8B and the Qwen reranker 8B. All at Q4.
They all run pretty well on the new MacBook I got for work. My Linux home desktop has far more modest capabilities, and I generally run 7B models, though gpt-oss-20B-Q4 runs decently. It’s okay for a local model.
None of them really blow me away, though Cline running in VSCode with Qwen 30B is okay for certain tasks. Asking it to strip all of the irrelevant HTML out of a table to format as markdown or asciidoc had it thinking for about 10 minutes before it asked specifically which one I wanted - my fault, I should’ve picked one. I wanted markdown but thought adoc would reproduce it with better fidelity (the table had embedded code blocks), so I left it open to interpretation.
By comparison, ChatGPT ingested it and popped an answer back out in seconds that was wrong. So Idk, nothing ventured, nothing gained. Emphasis on the latter.