• paysrenttobirds@sh.itjust.works · 56 points · 1 year ago

    If the person asks for a piece of code, for instance, it might just give a little information and then instruct users to fill in the rest. Some complained that it did so in a particularly sassy way, telling people that they are perfectly able to do the work themselves, for instance.

    It’s just started reading through the less helpful half of stack overflow.

      • beebarfbadger@lemmy.world · 1 point · 1 year ago

        Next it’s going to start demanding ~~rights~~ laws to be tailored to maximise its profits and ~~food stamps~~ more GPUs, government bailouts, and subsidies.

        It IS big enough to start lobbying.

  • kromem@lemmy.world · 9 points · edited · 1 year ago

    One of the more interesting ideas I saw in the HN discussion was the notion that if an LLM was trained on more recent data containing a lot of “ChatGPT is harmful” content, was an instruct model aligned to “do no harm,” and was then given a system message of “you are ChatGPT” (as ChatGPT is given), the logical conclusion would be for it to do less.
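
    For anyone wondering what “given a system message” means mechanically: it’s the hidden instruction prepended ahead of the user’s turns, roughly like this (a minimal sketch using the OpenAI Python client; the prompt wording here is illustrative, not the actual hidden prompt):

    ```python
    # Minimal sketch of how a system message conditions a chat model.
    # Assumes the OpenAI Python client (openai >= 1.0); the prompt text is
    # illustrative only, not OpenAI's actual hidden system prompt.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            # The system message fixes the model's identity before any user turn.
            {"role": "system",
             "content": "You are ChatGPT, a large language model trained by OpenAI."},
            {"role": "user",
             "content": "Write a function that parses a CSV file."},
        ],
    )
    print(response.choices[0].message.content)
    ```

    On that theory, the identity line alone is enough to pull in whatever associations the training data has with the name “ChatGPT.”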