This is generally correct, though diffusion models and GPTs work in totally different ways. Assuming an entity had lawful access to the image in the first place, nothing that persists in a trained diffusion model can realistically be considered a copy of any particular training image by anyone who knows what they're talking about.
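A rough back-of-the-envelope calculation makes the capacity point concrete. The figures below (checkpoint size, training set size, JPEG size) are approximate assumptions in the ballpark of a Stable-Diffusion-v1-scale model trained on a LAION-scale dataset, not exact numbers for any particular system:

    # Rough capacity argument: the checkpoint is far too small to store
    # copies of its training images. All figures are assumed, order-of-
    # magnitude values, not exact stats for any specific model.
    checkpoint_bytes   = 4 * 10**9      # ~4 GB of weights
    training_images    = 2 * 10**9      # ~2 billion training images
    typical_jpeg_bytes = 500 * 10**3    # ~500 KB for a typical JPEG

    bytes_per_image = checkpoint_bytes / training_images
    print(f"~{bytes_per_image:.1f} bytes of model capacity per training image")
    print(f"vs ~{typical_jpeg_bytes:,} bytes for a typical JPEG")
    # ~2 bytes per image: nowhere near enough to retain a copy of each one.

On those assumptions the model has on the order of a couple of bytes of capacity per training image, several orders of magnitude below what storing even a heavily compressed copy would require.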
The problem is that CS curricula vary quite a bit from university to university. Having ABET accreditation helps (or at least used to) quite a bit here, since it requires a program to cover certain brass-tacks material as well as workforce-oriented project work. However, many of those accredited programs effectively grew out of EE departments, so there's a very weird skewing effect in the field.