• sexhaver87@sh.itjust.works
    2 days ago

    Our thought processes rely on mechanisms that are not well understood, to the point where LLMs are unable to mimic them effectively (a side effect of the limits of what human programmers can capture, not a mark against them). An example of this is ChatGPT encouraging self-harm where a human assistant would not.