• Ludrol@szmer.info · 7 months ago

    In 2022, "AI" evolved into "AGI" and "LLM" into "AI". Languages are not static, as Old English shows. Get with the times.

    • Fedizen@lemmy.world · 7 months ago

      Changes to language made to sell products aren't really the language adapting; they're the language being influenced and distorted.

        • randomsnark@lemmy.ml · 7 months ago

          I think the modern pushback comes from people who get their understanding of technology from science fiction. SF has always (mis)used AI to mean sapient computers.

      • Echo Dot@feddit.uk · 7 months ago

        LLMs are one way of developing an AI. There are plenty of real conspiracies in this world; it's better to focus on them than to make stuff up.

        There really is an amazing technological development going on, and you're dismissing it over irrelevant semantics.

      • Aceticon@lemmy.world · 7 months ago

        The acronym AI has been used in game dev for ages to describe things like pathing and simulation. These are almost invariably either plain algorithms (such as A*, used by autonomous entities to find a path to a specific destination) or emergent behaviours, which are also algorithmic: simple rules applied to individual entities, for example each bird in a flock, produce a complex whole from many simple agents. An example of this in gamedev is Steering Behaviours; outside gaming, the Game of Life.
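
A minimal sketch of that "simple rules, complex whole" idea, using the Game of Life mentioned above (Python and the set-of-live-cells representation are my choices, not something from the thread):

```python
# One Game of Life generation: each cell only ever looks at its
# eight neighbours, yet gliders, oscillators, etc. emerge globally.
from collections import Counter

def life_step(live):
    """Advance one generation; `live` is a set of (x, y) cell coordinates."""
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell lives next tick with exactly 3 neighbours,
    # or with 2 neighbours if it is already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "blinker" flips between a horizontal and a vertical bar of three cells.
blinker = {(0, 1), (1, 1), (2, 1)}
print(life_step(blinker))  # {(1, 0), (1, 1), (1, 2)}
```

The birth/survival rule is the entire "program"; everything else players see is emergent, which is exactly the sense in which game dev has always called such systems AI.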

    • intensely_human@lemm.ee · 7 months ago

      They didn’t so much “evolve” as AI scared the shit out of us at such a deep level we changed the definition of AI to remain in denial about the fact that it’s here.

      Since time immemorial, passing a Turing test was the standard. As soon as machines started passing Turing tests, we decided the Turing test wasn't such a good measure of AI after all.

      But I haven’t yet seen an alternative proposed. Instead of using criteria and tasks to define it, we’re just arbitrarily saying “It’s not AGI so it’s not real AI”.

      In my opinion, it’s more about denial than it is about logic.