The definition of "can dish it out but can't take it."
An old accusation (1 year = 1,000 years in AI) that isn't relevant to DeepSeek's expected upcoming breakthrough model. Distillation is used to make smaller models, and they're always crap compared to training on open data. Distillation isn't a common technique anymore, though it's hard to prove that more tokens wouldn't be a "cheat code."
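For anyone unfamiliar with the term: here's a minimal sketch of what soft-label distillation looks like, assuming the standard temperature-scaled KL setup (nothing specific to DeepSeek's or OpenAI's actual pipelines):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Train the student to match the teacher's temperature-smoothed
    output distribution (the classic Hinton-style recipe)."""
    # Teacher's softened probabilities; no gradient flows into the teacher.
    soft_targets = F.softmax(teacher_logits.detach() / temperature, dim=-1)
    # Student's log-probabilities at the same temperature.
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence, scaled by T^2 so gradients stay comparable across temperatures.
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature ** 2
```

Whether or not you think it's a "cheat code," that's all distillation is: training on another model's outputs instead of (or on top of) raw data.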
This is more of a desperation play from the US model makers, even as YouTube is in full "buy $200/month subscriptions now or die" mode.
Big “I’m telling Mom” energy.

Classic pull-up-the-ladder-behind-you move.
Kind of hilarious that one component of their complaint is that the DeepSeek model is more energy/computation efficient than theirs. Welcome to the free market?!
OpenAI: “They stole our technology!”
Also OpenAI: "Uh, well, our technology is actually inferior to theirs, but they must have stolen it and made massive sweeping improvements to it that we weren't able to! How dare they!"
OpenAI should have been fucking open in the first place. The Chinese are the only ones bothering to open-source their models, and the US corpos' decision to immediately close-source everything is going to fuck them over in the end.
How dare you steal our technology! We stole it first!
Ain’t no copyright in an AI world
But it's Open AI?
Oh no!
OpenAI should copyright their work. I'm sure no one would dare steal someone else's hard work for their AI model development!
They steal from everyone, but when someone steals from them, it's suddenly too much.
Can dish out what? They haven't made a profitable product, ever. If you had a lemonade stand, you'd be more profitable than those fucks.
The DeepSeek API isn't free, and to use Qwen you'd have to sign up for Ollama Cloud or something like that, as local deployment is prohibitive.
They're trying to tie DeepSeek to the old tale of free-riding companies that supposedly have ties to the original company's product and get a "look the other way" attitude from it (e.g., Meta with their WhatsApp products). This situation is nothing like that.
The DeepSeek API isn't free, and to use Qwen you'd have to sign up for Ollama Cloud or something like that
To use Qwen, all you need is a decent video card and a local LLM server like LM Studio.
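If you'd rather script it than click around, here's a minimal sketch assuming LM Studio's OpenAI-compatible local server is running on its default port (the model id below is a placeholder for whatever you've loaded):

```python
import requests

# LM Studio exposes an OpenAI-style chat endpoint locally; 1234 is its default port.
resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "qwen3-30b",  # placeholder: use the id of the model you loaded
        "messages": [{"role": "user", "content": "Say hi from my GPU."}],
        "temperature": 0.7,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```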
Local deployment is prohibitive
There's a shitton of LLM models in various sizes to fit the requirements of your video card. Don't have the 256GB of VRAM required for the full 8-bit-quantized 235B Qwen3 model? Fine, get the 4-bit-quantized 30B model that fits on a 24GB card. Or a Qwen3 8B Base post-trained from DeepSeek-R1, quantized to 6-bit, that fits on an 8GB card.
There are literally hundreds of variations that people have made to fit whatever size you need… because it’s fucking open-source!
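If you want to sanity-check those numbers yourself, the weight-memory math is dead simple. A rough sketch (the ~10% overhead factor for KV cache and runtime is my assumption, not a spec):

```python
def vram_gb(params_billion: float, bits: int, overhead: float = 1.1) -> float:
    """Approximate VRAM for the weights: params x bytes-per-param,
    padded ~10% for KV cache and runtime overhead (a rough guess)."""
    return params_billion * (bits / 8) * overhead

print(f"Qwen3 235B @ 8-bit: ~{vram_gb(235, 8):.0f} GB")  # ~259 GB -> the 256GB-class claim
print(f"Qwen3 30B  @ 4-bit: ~{vram_gb(30, 4):.0f} GB")   # ~16 GB  -> fits a 24GB card
print(f"Qwen3 8B   @ 6-bit: ~{vram_gb(8, 6):.0f} GB")    # ~7 GB   -> fits an 8GB card
```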
Training LLMs is very costly, and open weights aren't open source. For example, there are some LLMs in Brazil, but there's a notable case of a Brazilian student at the University of Düsseldorf who banded together with two other students of non-Brazilian origin to make a Brazilian Portuguese LLM, a 4B model. They used Google to train it, I think, because training on low VRAM won't work. It took many days and over $3,000. The model's name is Tucano.
I know it looks cheap because there are so many models out there, but a lot of national initiatives are eager for AI technology, and it's costly.
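For a feel of how a bill like that adds up, here's a back-of-the-envelope sketch with made-up numbers (the rate, accelerator count, and duration are all assumptions, not Tucano's actual figures):

```python
# All values hypothetical -- just showing how "many days" of cloud
# accelerators reaches the $3000+ range.
hourly_rate_usd = 2.50   # assumed price of one cloud GPU/TPU per hour
num_accelerators = 8     # assumed
hours = 24 * 7           # "many days" -> one week here

total = hourly_rate_usd * num_accelerators * hours
print(f"~${total:,.0f}")  # ~$3,360
```

And that's for a small 4B model; frontier-scale training budgets are orders of magnitude beyond that.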