Google has plunged the internet into a “spiral of decline”, the co-founder of the company’s artificial intelligence (AI) lab has claimed.

Mustafa Suleyman, the British entrepreneur who co-founded DeepMind, said: “The business model that Google had broke the internet.”

He said search results had become plagued with “clickbait” to keep people “addicted and absorbed on the page as long as possible”.

Information online is “buried at the bottom of a lot of verbiage and guff”, Mr Suleyman argued, so websites can “sell more adverts”, fuelled by Google’s technology.

  • Square Singer@feddit.de · 1 year ago

    The issue with reliability is a completely different one between web search and AI.

    If you search something on Google, there are quite a few ways you can judge the quality of the answer with “metadata” around it. If you find a scientific paper, it’s probably more reliable than a post on a parents forum. If the source is a quality newspaper or Wikipedia, that’s also more on the reliable side, but some conspiracy theorist website is not. And if the source is some kind of forum or Q&A site, wrong answers often have comments under them that correct the error.

    Also, you can follow multiple links and take a wider sample on the topic that way.

    With AI that’s not possible. Whether it is wrong or correct, the AI will give you an answer in the exact same format, with the same self-confident tone. You basically need to know the correct answer to know whether the answer is correct.

    Sure, you can re-roll and ask it again, but that doesn’t make the result more likely to be correct.

    For example, I asked ChatGPT which Harry Potter chapter is the longest. It happily gave me a chapter, but it wasn’t the longest. So I asked again and again and again, and each time it gave me a new wrong answer, every time with made-up word counts.
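    The word-count example is a task a few lines of ordinary code answer exactly, with no room for a confidently wrong number. A minimal sketch, assuming the chapters have been saved locally as `chapter_01.txt`, `chapter_02.txt`, … (hypothetical filenames, adjust to however your text is split):

    ```python
    # Deterministic word counts: no model involved, so no made-up numbers.
    from pathlib import Path

    def longest_chapter(directory: str) -> tuple[str, int]:
        # Count whitespace-separated words in each chapter file.
        counts = {
            p.name: len(p.read_text(encoding="utf-8").split())
            for p in sorted(Path(directory).glob("chapter_*.txt"))
        }
        # Return the file with the highest count, plus that count.
        name = max(counts, key=counts.get)
        return name, counts[name]
    ```

    Unlike the chatbot, this either gives the right answer for the files it was given or fails loudly, which is exactly the verifiability that a raw LLM answer lacks.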

    • Zeth0s@lemmy.world · edited · 1 year ago

      This is why I am suggesting people give perplexity.ai a try, to understand how these tools will work in the near future. It's also why I don't understand why I am being downvoted for that.

      The current “free” ChatGPT was created as a proof of concept, not as a finished, complete solution to humanity's problems. What we have now is a showcase of LLMs: for OpenAI, a way to improve the product via human feedback; for everyone else, a chance to enjoy what is already, with all its limitations, an extremely useful tool.

      But this kind of LLM is intended to be a building block for future solutions, enabling interactivity, summarization, and analysis features within larger products with broader and more refined feature sets.

      If you don’t have the paid version of ChatGPT, again, try perplexity.ai with the Copilot feature to see a (still imperfect, under-development) proof of concept of the near future of AI-assisted research.

      And more tools will come that will make it easier to navigate the huge amount of information that is the main problem of the modern internet.

      For your specific case, GPT-3.5 has poor logical and mathematical capabilities; GPT-4 is much better at that. But still, using a language model for math is almost never a good choice. What you’d need is an LLM able to pull information from the internet and to call a math tool such as Python or MATLAB. These options are currently available on ChatGPT via plugins, but they are suboptimal. In the future you’ll have better products that combine an LLM, focused internet search, and math tools.
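      The "LLM delegates math to a real tool" idea can be sketched in a few lines: the model's only job is to produce an arithmetic expression, and an actual interpreter evaluates it. This is a hedged sketch, not any vendor's API; the model call is stubbed out with a hypothetical `fake_llm_extract_expression`, and evaluation is restricted to pure arithmetic via Python's `ast` module.

      ```python
      import ast
      import operator

      # Only these arithmetic operations are allowed.
      _OPS = {
          ast.Add: operator.add, ast.Sub: operator.sub,
          ast.Mult: operator.mul, ast.Div: operator.truediv,
          ast.Pow: operator.pow, ast.USub: operator.neg,
      }

      def safe_eval(expr: str) -> float:
          """Evaluate a pure-arithmetic expression, rejecting anything else."""
          def walk(node):
              if isinstance(node, ast.Expression):
                  return walk(node.body)
              if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                  return node.value
              if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
                  return _OPS[type(node.op)](walk(node.left), walk(node.right))
              if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
                  return _OPS[type(node.op)](walk(node.operand))
              raise ValueError("unsupported syntax")  # names, calls, etc.
          return walk(ast.parse(expr, mode="eval"))

      def fake_llm_extract_expression(question: str) -> str:
          # Stand-in for a model that turns a question into an expression.
          return "17 * 23 + 4"

      answer = safe_eval(fake_llm_extract_expression("What is 17 times 23, plus 4?"))
      ```

      The arithmetic is done by the interpreter, not the model, so the number cannot be "confidently wrong" the way a raw LLM answer can; the model is used only for what it is good at, turning language into a formal expression.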

      We should focus on the future, not on the present, when discussing AI. LLM-based products are in their infancy.