I followed these steps, but just so happened to check on my mason jar 3-4 days in and saw tiny carbonation bubbles rapidly rising throughout.

I thought that might just be part of the process, but I double-checked with a Google search on day 7 (when there were no bubbles in the container at all).

Turns out I had just grown a botulism culture, and garlic in olive oil specifically is a fairly common way to grow this biotoxin.

Had I not checked on it 3-4 days in, I’d have been none the wiser and would have Darwinned my entire family.

Prompt with care and never trust AI dear people…

  • SzethFriendOfNimi@lemmy.world · 6 months ago

    Hallucination, though, does fit.

    It’s a term that, in the context of a source, implies something untrustworthy, not authoritative, and/or imagined.

    Lots of examples from everyday usage and scenarios come to mind.

    “And then I saw the defendant punch the victim and then I was blinded by the sunlight”

    Are you sure you didn’t hallucinate the entire episode? It was night, after all.

    Or

    “Somebody please get these ants off of me”

    Doctor writes: Hallucinations of ants on skin

    • snooggums@midwest.social · 6 months ago

      Those are examples of actual hallucinations where something did not happen.

      Quoting a joke Reddit thread as factual is not hallucinating. There was such a thread, but it wasn’t factual, and an LLM is wrong to present it as if it were.

      • SzethFriendOfNimi@lemmy.world · 6 months ago

        That’s the issue. LLMs aren’t trustworthy. They hallucinate.

        I presume, as the default, that anything an LLM produces is a hallucination right out of the gate.

        • Amoeba_Girl@awful.systems · 6 months ago

          “Hallucination” implies LLMs can meaningfully perceive. They can’t; they’re not made that way, and they have no reason to be.

        • yuri@pawb.social · 6 months ago

          We’re arguing about language now, though, and by definition it isn’t “hallucinating”. By saying that’s what’s happening, you’re unintentionally legitimizing the “AI is making decisions” misinformation.

          To get really pedantic, “flashback” would be a better label. It’s not making things up out of whole cloth, just repeating stuff way out of context.