• Lodespawn@aussie.zone
    4 days ago

    Arguably it is a problem with the LLMs, because they are being trained on an unknowable amount of garbage data. It’s a garbage-in, garbage-out problem: if the people training these LLMs are not vetting the input data, then you have to assume that any data output by the LLM contains some level of garbage.

    The solution is to use them only for non-critical use cases and to vet everything they output.