• sugar_in_your_tea@sh.itjust.works
    1 year ago

    Yeah, this is why I can’t really take anyone seriously when they say it’ll take over the world. It’s certainly cool, but it’s always going to be limited in usefulness.

    Some areas I can see it being really useful are:

    • generating believable text - scams, placeholder text, and general structure
    • distilling existing information - especially if it can actually cite sources, but even then I’d take it with a grain of salt
    • trolling people/deep fakes

    That’s about it.

    • Ech@lemm.ee
      1 year ago

      generating believable text - scams, placeholder text, and general structure

      LLM-generated scams are going to be such a problem. Quality isn’t even an issue there, since they specifically target people with poor awareness of these scams, and having a bot that responds with reasonable dialogue will make it that much easier for people to buy into it.

    • AI tools can be very powerful, but they usually need to be tailored to a specific use case by competent people.

      With LLMs it seems to be the opposite: people without ML competence are applying them to the broadest possible use cases. The output just looks so good that they are easily fooled and lack the understanding to recognize the limits.

      But there is one very important use case too:

      Writing stuff that is only read and evaluated by similar AI tools. It makes sense to write cover letters with ChatGPT because they are demanded but never read by a human on the other side of the job application. Since the weights and such behind these tools seem to be similar, writing the letter with ChatGPT helps it pass the automatic analysis.

      Rationally that is complete nonsense, but you basically need an AI tool to jump through the hoops set up by an AI tool deployed by people who want to make themselves look smart.