You probably wouldn’t use an LLM for error correction like that, and it wouldn’t be priced by type of word but rather by API costs and MCP surcharges. Still, I think that’s a pretty accurate assessment of the situation.
Yeah I doubt even LLMs charge by word. They charge by token and API calls.
Current pricing models are based on input and output tokens, but I expect a monthly minimum alongside the API costs soon. Programmers are probably getting the most out of LLMs right now, and eventually Anthropic will realize they are losing money on Claude Code and start squeezing.
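For anyone curious how the per-token math shakes out, here's a minimal sketch. The rates, the call sizes, and the monthly minimum are all made-up placeholders, not any provider's actual pricing:

```python
# Rough sketch of token-based API billing plus a speculated monthly minimum.
# All numbers below are hypothetical, not any provider's real rates.
INPUT_RATE_PER_M = 3.00    # dollars per 1M input tokens (hypothetical)
OUTPUT_RATE_PER_M = 15.00  # dollars per 1M output tokens (hypothetical)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of a single API call under a pure per-token model."""
    return (input_tokens / 1_000_000) * INPUT_RATE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_RATE_PER_M

# A coding-heavy session: large context in, generated code out, many calls.
calls = [(40_000, 3_000)] * 200  # 200 calls, ~40k tokens in / ~3k out each
total = sum(request_cost(i, o) for i, o in calls)
print(f"Per-token total for the session: ${total:.2f}")  # -> $33.00

# The speculation above: a monthly minimum layered on top of per-token costs.
MONTHLY_MINIMUM = 20.00  # hypothetical floor
monthly_bill = max(total, MONTHLY_MINIMUM)
print(f"With a hypothetical monthly minimum: ${monthly_bill:.2f}")
```

The point of the arithmetic: heavy coding workloads rack up per-token costs fast, which is exactly why a flat monthly floor would mostly hit light users while heavy users keep paying by usage.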