

For the chain-of-thought instruction-following model gpt-oss-20b, I’ve noticed its reasoning content often includes it talking about the stuff it is supposed to avoid in the final output, and double-checking that the forbidden output isn’t there. So it wastes tokens talking about pink elephants in its reasoning content, but then does okayish at avoiding pink elephants in its final output.



You’ve described the problem with generalization, yes. You could maybe train it not to generate “all men are cats”, but that might also prevent it from making the more correct generalization “all cats are mortal”, or even completely valid generalizations like combining “all men are mortal” and “Socrates is a man” to get “Socrates is mortal”.
The problem with monofacts is a bit more subtle. Say the fact “John Smith was born in Seattle in 1982, earned his PhD from Stanford in 2008, and now leads AI research at Tech Corp” appears only once in the training data. The model will have seen some of the component words many times and learned to generate them in the right contexts: Seattle as a location in the US, Stanford as a university, 2008 as a date, and so on. But the combination describing John Smith appears uniquely, which trains the model to generate facts that are unique combinations of familiar pieces. So the model might make up a fact like “Jane Doe was born in Omaha in 1984, earned her master’s from Caltech in 2006, and is now CEO of Tech Corp” because it fits the pattern of a unique fact from its training data.
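To make “appears only once” concrete, here’s a rough Python sketch (the toy corpus and the treat-each-sentence-as-a-fact shortcut are entirely made up for illustration, not how anyone actually measures this) showing how the component tokens recur while the full combination is a singleton:

```python
from collections import Counter

# Made-up toy corpus; each sentence stands in for one "fact".
corpus = [
    "John Smith was born in Seattle in 1982, earned his PhD from Stanford in 2008, and now leads AI research at Tech Corp.",
    "Seattle is a city in the US state of Washington.",
    "Seattle is a city in the US state of Washington.",
    "Stanford awarded thousands of PhDs in 2008.",
    "Seattle hosted the World's Fair in 1962.",
]

# Count whole facts and individual tokens separately.
fact_counts = Counter(corpus)
token_counts = Counter(tok for sent in corpus for tok in sent.split())

# A "monofact" is a fact that occurs exactly once in the corpus.
monofacts = [fact for fact, n in fact_counts.items() if n == 1]
print("monofact fraction:", len(monofacts) / len(fact_counts))
print("'Seattle' token appears", token_counts["Seattle"], "times")
print("the John Smith fact appears", fact_counts[corpus[0]], "time")
```

The pieces (“Seattle”, “Stanford”, “2008”) show up repeatedly, so the model learns to place them fluently, but the John Smith combination only ever shows up once, so the only thing that pattern can teach is “emit novel-looking combinations of these familiar pieces”.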