• Psythik@lemmy.world
    19 hours ago

    Gemini once told me to “please wait” while it did “further research”. I responded with, “that’s not how this works; you don’t follow up like that unless I give you another prompt first”, and it was basically like, “you’re right but just give me a minute bro”. 🤦

    Out of all the LLMs I’ve tried, Gemini has got to be the most broken. And sadly it’s the one LLM the average person is exposed to the most, because it’s in nearly every Google search.

    • SSUPII@sopuli.xyz
      5 hours ago

      Gemini gets constantly glazed by the AI enthusiast community because it often scores very well on benchmarks, even though it’s literally one of the worst ones to actually use.

    • DragonTypeWyvern@midwest.social
      18 hours ago

      I’d argue that Gemini is actually really good at summarizing a Google search, filtering out the trash, and convincing people not to click the actual links, which is how Google makes money.

      • Psythik@lemmy.world
        16 hours ago

        Yeah, but when it’s a total crapshoot whether its summary is accurate, you can’t trust it. I adblocked those summaries cause they’re useless.

        At least some of the competing AIs show their work. Perplexity cites its sources, and ChatGPT recently added that ability as well. I won’t use an LLM unless it does, cause then you can easily check the sources it used and see whether the slop it spit out has even a grain of truth to it. With Gemini, there’s no easy way to verify anything it says beyond doing the googling yourself, which defeats the point.