Somdudewillson@burggit.moe to AI discussions@burggit.moe • Article: Data Protectionism Is Self-Defeating (English)
1 year ago

There is at least some evidence that LLMs can learn to produce “better” outputs than any of their training examples. Admittedly, the example I’m referring to used a synthetic grammar to test LLM capabilities on a problem of known difficulty, but the fact remains that they trained a model only on examples containing errors and got a model that could produce entirely correct output.
The company in question here is Stability AI AFAIK, not OpenAI?
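On the first point, here’s a minimal sketch of the kind of setup I mean: train a model only on corrupted strings from a synthetic grammar, then check how often its samples are fully valid. To be clear, the grammar (balanced parentheses), the corruption scheme, and the character-level bigram “model” below are all stand-ins I made up for illustration; the study I’m remembering used an actual LLM and its own grammar, so don’t read this toy as replicating the result, just as showing what “trained only on erroneous examples, measured on correctness” looks like.

```python
import random
from collections import defaultdict

random.seed(0)

# Toy synthetic grammar: nonempty strings of balanced parentheses.
def sample_balanced(max_depth=4):
    def gen(depth):
        if depth == 0 or random.random() < 0.4:
            return ""
        return "(" + gen(depth - 1) + ")" + gen(depth - 1)
    s = ""
    while not s:
        s = gen(max_depth)
    return s

def is_valid(s):
    # A string is "correct" iff it is nonempty and balanced.
    depth = 0
    for ch in s:
        depth += 1 if ch == "(" else -1
        if depth < 0:
            return False
    return depth == 0 and len(s) > 0

def corrupt(s, p=0.1):
    # Flip or drop characters; guarantee the result contains an error.
    out = []
    for ch in s:
        r = random.random()
        if r < p:
            out.append(")" if ch == "(" else "(")  # flip
        elif r < 2 * p:
            continue                               # drop
        else:
            out.append(ch)
    c = "".join(out)
    return c if not is_valid(c) else c + "("

# Training set: every example is corrupted, i.e. contains errors.
train = [corrupt(sample_balanced()) for _ in range(5000)]
assert not any(is_valid(s) for s in train)

# Character-level bigram model as a toy stand-in for the LLM.
BOS, EOS = "^", "$"
counts = defaultdict(lambda: defaultdict(int))
for s in train:
    seq = BOS + s + EOS
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1

def sample_model(max_len=40):
    ch, out = BOS, []
    for _ in range(max_len):
        nxt = counts[ch]
        total = sum(nxt.values())
        r = random.uniform(0, total)
        for cand, c in nxt.items():
            r -= c
            if r <= 0:
                ch = cand
                break
        if ch == EOS:
            break
        out.append(ch)
    return "".join(out)

samples = [sample_model() for _ in range(2000)]
print("fully valid training examples:", sum(is_valid(s) for s in train) / len(train))
print("fully valid model samples:    ", sum(is_valid(s) for s in samples) / len(samples))
```

The interesting measurement is the gap between those two numbers: the training data is 0% correct by construction, so any fully valid generations at all mean the model’s outputs exceed the correctness of everything it was trained on.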