• rozodru@pie.andmc.ca
    3 hours ago

the issue is that at one point, say a year or so ago, LLMs weren't that bad. They were fairly accurate in their solutions. But over the past few months they've all collectively gotten noticeably worse. They ate up all the available content and then started eating each other's waste and vomiting it back out as solutions. Claude, for example, was at one point a decent coding assistant. Now? Now 8 out of 10 of its solutions are hallucinations. GPT5 is a clear downgrade from previous versions; it just rants and rants in the hope that somewhere in its rants and info dumps the correct solution might turn up. It also fails to remember the context of a prompt most of the time. If you can't get the correct solution in one answer from GPT5, you might as well close the tab, because it's never going to get there.