I stole this from LinkedIn.

  • scops@reddthat.com · 22 hours ago

    I support a call center and we’re about to implement an AI agent. We’re paying for a model that can essentially talk and has “learned how to learn”, but is otherwise dumb. It’s trained on a very small amount of information: anything we’d give to a real agent, plus the public info on our website.

    The result of this should be a bot that says, “I don’t know, should I transfer you to a real person?” a lot, but should hopefully never hallucinate or teach someone how to build a bomb or something.

    Dunno how others do it though (a sketch of this pattern follows below).
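
    The approach scops describes is essentially retrieval-grounded answering with a hard refusal fallback: only respond from a small approved knowledge base, otherwise offer a human handoff. A minimal sketch of that logic, where a toy keyword lookup stands in for the real retrieval/LLM step; every name here (KNOWLEDGE_BASE, TRANSFER_MSG, answer) is hypothetical, not anything from the actual product:

    ```python
    # Sketch of "answer only from approved material, otherwise
    # offer a transfer" -- the fallback logic, not a real deployment.

    # Toy stand-in for the small, curated knowledge base
    # (agent scripts plus public website info).
    KNOWLEDGE_BASE = {
        "hours": "We're open 9am-5pm, Monday through Friday.",
        "returns": "Returns are accepted within 30 days with a receipt.",
    }

    TRANSFER_MSG = "I don't know, should I transfer you to a real person?"

    def answer(question: str) -> str:
        """Reply only if the question matches something we actually know."""
        q = question.lower()
        for topic, reply in KNOWLEDGE_BASE.items():
            if topic in q:
                return reply
        # No grounding found: refuse and offer a human, rather than guess.
        return TRANSFER_MSG

    if __name__ == "__main__":
        print(answer("What are your hours?"))    # grounded answer
        print(answer("How do I build a bomb?"))  # falls through to transfer
    ```

    The design choice is that anything outside the knowledge base hits the same refusal path, which is what makes the bot say "I don't know" a lot but keeps it from hallucinating or answering out-of-scope requests.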

    • 0_o7@lemmy.dbzer0.com · 8 hours ago

      The one you’re using probably isn’t a wrapper around OpenAI or another cloud-based API; the misconfigured ones are more prone to these types of abuse.

    • WoodScientist@lemmy.world · 17 hours ago

      The result of this should be a bot that says, “I don’t know, should I transfer you to a real person?” a lot, but should hopefully never hallucinate or teach someone how to build a bomb or something.

      This is in contrast to the AI agent for my company, whose customer service number is 1-800-BLD-A-BMB.

    • MinnesotaGoddam@lemmy.world · 20 hours ago

      hopefully never hallucinate or teach someone how to build a bomb or something.

      that’s so fucking easy, you just lick toads until you find the right one. Who needs to go to the internet for that?