We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision.

https://arxiv.org/abs/2311.07590

  • lolcatnip@reddthat.com · 1 year ago

    The thing about saying something is or isn’t conscious is that we don’t have any good theory of what consciousness even is. It’s not something we can measure. The only way we can assure ourselves that other people are conscious is that they claim to be conscious in ways we find convincing and otherwise behave in ways we associate with our own consciousness.

    I can’t think of any reason why a lump of silicon should attain consciousness just because you ran the right program on it, but then I can’t see why a blob of cells should be conscious either. Nor can I think of any reason why we’d be aware of it if a lump of silicon did become conscious.