I was watching the RFK Jr. questioning today, and when Bernie was talking about healthcare and wages I felt he was the only one who gave a real damn. I also thought, “Wow, he’s kinda old,” so I asked my phone how old he actually was. Gemini, however, wouldn’t answer a simple, factual question about him. What the hell? (The answer is 83 years old btw, good luck America)

    • tal@lemmy.today · 18 points · 5 hours ago (edited)

      I mostly can’t understand why people are so into “LLMs as a substitute for Web search”, though there are a bunch of generative AI applications that I do think are great. I eventually realized that for people who want to use their cell phone via voice, LLM queries can be done without hands or eyes getting involved. Web searches cannot.

      • Kairos@lemmy.today · 2 points · 28 minutes ago

        Because web search was intentionally hobbled by Google so people are pushed to make more searches and see more ads.

      • JustAnotherKay@lemmy.world · 1 point · 11 minutes ago

        Web searches cannot

        Web search by voice was a solved problem in recent memory. Then it got shitty again.

      • Squorlple@lemmy.world · 10 points · 5 hours ago

        Would saying “Gemini, open the Wikipedia page for Bernie Sanders and read me the age it says he is”, for example, suffice as a voice input that both bypasses subject limitations and evades AI bullshitting?

    • halcyoncmdr@lemmy.world · 3 points · 3 hours ago

      To be honest, that seems like it should be the one thing they are reliably good at. It only requires looking up info in their database, with no manipulation.

      Obviously that’s not the case, but that’s because current LLMs are a grift to milk billions from corporations by using the buzzwords that corporate middle management relies on to make it seem like they’re doing any work. They lean on modern corporate FOMO to get companies to buy a terrible product they absolutely don’t need, at exorbitant contract prices, just to say they’re using the “latest and greatest” technology.

        • SmoothLiquidation@lemmy.world · 1 point · 33 minutes ago

        To be honest, that seems like it should be the one thing they are reliably good at. It requires just looking up info on their database, with no manipulation.

        That’s not how they are designed at all. LLMs are just text predictors. If the user inputs something like “A B C D E F” then the next most likely word would be “G”.

        Companies like OpenAI will try to add context to make things seem smarter, like priming the model with the current date so it won’t just respond with some date from its training data, or looking up info on specific people or whatnot, but at their core they are just really big autofill text predictors.
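        The “next most likely word” idea can be sketched with a toy bigram counter. This is nothing like a real transformer, and the training text here is made up, but the core task is the same: given the context, emit the likeliest next token.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    # Count, for each token, which token follows it and how often.
    tokens = text.split()
    successors = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        successors[prev][nxt] += 1
    return successors

def predict_next(successors, token):
    # Return the most frequently observed successor, or None if unseen.
    if token not in successors:
        return None
    return successors[token].most_common(1)[0][0]

# Toy "corpus": every time the model saw "F", the next token was "G".
model = train_bigrams("A B C D E F G A B C D E F G")
print(predict_next(model, "F"))  # → G
```

        A model like this has no notion of truth; it only reproduces whichever continuation was most common in its training data, which is why bolted-on context (current date, retrieved facts) is needed to make the output look informed.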

      • DrFistington@lemmy.world · 5 points · 3 hours ago

        Yeah, I still struggle to see the appeal of chatbot LLMs. So it’s like a search engine, but you can’t see its sources, and sometimes it “hallucinates” and gives straight-up incorrect information. My favorite was a few months ago when I was searching Google for why my cat was chewing on plastic. About halfway through, the AI response at the top of the results went on a tangent about how your cat may be bored and enjoys watching you shop, lol

        So basically it makes it easier to get a quick result if you’re not able to quickly and correctly parse through Google results… but the answer you get may be anywhere from zero to a hundred percent correct. And you don’t really get to double-check the sources without further questioning the chatbot. Oh, and LLMs have been shown to intentionally lie and mislead when confronted with inaccuracies they’ve given.

        • tal@lemmy.today · 1 point · 34 minutes ago

          Yeah, I still struggle to see the appeal of Chatbot LLMs.

          I think one major application is avoiding human staff on support sites. Some people aren’t willing or able to search a site for information, but they can ask questions in plain language. I’ve seen a ton of companies with AI-driven support chatbots.

          There are sexy chatbots. What I’ve seen of them hasn’t really impressed me, but you don’t always need an amazing performance to keep an aroused human happy. I do remember, back when I was much younger, trying to gently tell a friend who had spent multiple days chatting with “the sysadmin’s sister” on a BBS that he’d been talking to a chatbot, and that was a much simpler system than today’s. There’s probably real demand, though I think this is going to become commodified pretty quickly.

          There’s the “works well with voice control” aspect that I mentioned above. That’s a real thing today, especially when, say, driving a car.

          It’s just not – certainly not in 2025 – a general replacement for Web search for me.

          I can also imagine some ways to improve it down the line. Like, okay, one obvious point you raise is that if a human can judge the reliability of information on a website, that human having access to the website is useful. I feel like I’ve got pretty good heuristics for that. Not perfect (I certainly can get stuff wrong), but probably better than current LLMs.

          But… a number of people must be really appallingly awful at this. People would not be watching conspiracy-theory material on wacky websites if they had a great ability to evaluate it. It might be possible to build a bot with heuristics solid enough to filter out or deprioritize sources based on reliability. A big part of what Web search does today is exactly that: it wants to get a relevant result into your top few hits and filter out the dreck. I bet there’s a lot of room to improve on that.

          Like, say I’m viewing a page of forum text. Google’s PageRank or similar can’t treat different content on the page as having different reliability, because all it can do is send you to the page or not, at some priority. But an AI system can, say, profile individual users on a forum for reliability and give a finer-grained response. Maybe a Reddit thread has material from User A, who the ranking algorithm doesn’t consider reliable, and User B, who it does.
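          The per-author weighting idea could be sketched like this. Everything here is hypothetical: the reliability scores, the posts, and the `rank_posts` helper are invented for illustration, and a real system would have to learn those scores somehow rather than hard-code them.

```python
# Assumed, made-up reliability weights per author (0.0 to 1.0).
reliability = {"UserA": 0.2, "UserB": 0.9}

# Invented example posts from a hypothetical forum thread.
posts = [
    {"author": "UserA", "text": "The moon landing was faked."},
    {"author": "UserB", "text": "Apollo 11 landed on July 20, 1969."},
]

def rank_posts(posts, reliability, default=0.5):
    # Surface content from more-reliable authors first; unknown
    # authors get a neutral default weight.
    return sorted(posts,
                  key=lambda p: reliability.get(p["author"], default),
                  reverse=True)

for p in rank_posts(posts, reliability):
    print(p["author"], "-", p["text"])
```

          Ranking individual posts rather than whole pages is the finer granularity a page-level algorithm like PageRank can’t express.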

  • Robotunicorn@lemmy.world · 9 points · 4 hours ago

    That’s because Google is not only a shit company, it’s also in Trump’s pocket. They changed the name of the Gulf of Mexico for US users. WTF?

  • Jo Miran@lemmy.ml · 8 points · 4 hours ago

    It looks like they disabled responses about current major political figures.