Gemini once told me to “please wait” while it did “further research”. I responded with, “that’s not how this works; you don’t follow up like that unless I give you another prompt first”, and it was basically like, “you’re right but just give me a minute bro”. 🤦
Out of all the LLMs I’ve tried, Gemini has got to be the most broken. And sadly that’s the one LLM your average person is most exposed to, because it’s in nearly every Google search.
Gemini gets constantly glazed by the AI enthusiast community because it often scores very well on benchmarks, when it is literally one of the worst ones to actually use.
I’d argue that Gemini is actually really good at summarizing a Google search, filtering the trash from it, and convincing people not to click the actual links, which is how Google makes money.
Yeah but when it’s a total crapshoot as to whether or not its summary is accurate, you can’t trust it. I adblocked those summaries cause they’re useless.
At least some of the competing AIs show their work. Perplexity cites its sources, and even ChatGPT recently added that ability as well. I won’t use an LLM unless it does, cause you can easily check the sources it used and see if the slop it spit out has even a grain of truth to it. With Gemini, there’s no easy way to verify anything it said beyond just doing the googling yourself, and that defeats the point.