most of the time you’ll be talking to a bot there without even realizing it. they’re gonna feed you products and ads interwoven into conversations, and the AI can be controlled so its output reflects corporate interests. advertisers are gonna be able to buy access and run campaigns. based on their input, the AI can generate thousands of comments and posts, all to support their corporate agenda.

for example you can set it to hate a public figure and force negative commentary into conversations all over the site. you can set it to praise and recommend your latest product. like when a pharma company has a new pill out, they’ll be able to target self-help subs and flood them with fake anecdotes and user testimonials claiming the new pill solves all your problems and you should check it out.

the only real humans you’ll find there are the shills that run the place, and the poor suckers that fall for the scam.

it’s gonna be a shithole.

  • hardypart@feddit.de · 37 points · 1 year ago

    I actually think this is the fate of the entire corporate-driven part of the internet (so basically 95% of it nowadays, lol). Non-corporate, federated platforms are the future and will remain the bastions of actual human interaction while the rest of the internet gets FUBARed by large language model bots.

    • mrbubblesort@kbin.social · 21 points · 1 year ago

      Seriously asking, what makes you think the fediverse is immune to that? Eventually they’ll get good enough that they’ll be almost indistinguishable from normal users, so how can we keep the bots out?

      • rastilin@kbin.social · 16 points · 1 year ago

        There are a number of options, including a chain of trust where you only see comments from someone who’s been verified by someone who’s been verified by someone, and so on, up to an actual real human that you’ve met in person. We could also charge per post, which would rapidly drive up the cost of a botnet (as well as trim down the number of two-word derails).
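        A minimal sketch of how such a chain of trust could be checked: treat vouches as edges in a graph and ask whether a user is reachable from anyone verified in person, within some maximum chain length. The names, the `vouches` mapping, and the depth cap are all hypothetical; a real system would also back each vouch with a cryptographic signature.

        ```python
        from collections import deque

        def is_trusted(user, trust_roots, vouches, max_depth=4):
            """Return True if `user` is reachable from any in-person-verified
            root through a chain of vouches no longer than `max_depth`.

            `vouches` maps a verified user to the set of users they vouch for.
            """
            frontier = deque((root, 0) for root in trust_roots)
            seen = set(trust_roots)
            while frontier:
                current, depth = frontier.popleft()
                if current == user:
                    return True
                if depth == max_depth:
                    continue  # chain too long; don't follow further vouches
                for vouched in vouches.get(current, ()):
                    if vouched not in seen:
                        seen.add(vouched)
                        frontier.append((vouched, depth + 1))
            return False
        ```

        The depth cap matters: without it, one compromised account deep in the graph can launder an unlimited number of bots into everyone’s view.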

        • BraveSirZaphod@kbin.social · 3 points · 1 year ago

          I’m not sure how reliable chains of trust would be. There’s a pretty obvious financial incentive for someone to simply lie and vouch for a bot. But in general, I think some kind of network of trustworthiness or verification as a real human will eventually be necessary. I could see PGP and the like being useful.

        • archomrade [he/him]@midwest.social · 0 points · 1 year ago

          “charge per post”

          That part kind of worries me, are you proposing charging users to participate in the fediverse? Seems like it would also exclude a lot of people who can’t afford to spend money on social media…

          • riskable@kbin.social · 2 points · 1 year ago

            Listen here, you! I paid good money for this here comment so you’re gonna read it, alright‽

            <Brought to you by FUBAR, a corporation with huge pockets that can afford to sway opinion with lots of carefully placed bot comments>

          • rastilin@kbin.social · 1 point · 1 year ago

            The obvious question is then “how are they helping pay for the servers they’re using?”.

            It’s not that I don’t see your point; everyone should be able to take part in a community without having to spend money. But I do find it annoying that whenever the topic of money comes up, we end up debating the hypothetical of someone with 0¢ spare in their budget.

            Charging for membership worked well for Something Awful, and they only charge something like $20 for lifetime membership anyway, plus an additional fee for extra functionality. But you don’t get the money back if you get banned. Corporations would still be able to spend their way into the conversation, but it would be harder to create massive networks that just flood the real users.

            • archomrade [he/him]@midwest.social · 1 point · edited · 1 year ago

              The nice thing about federated media is that there doesn’t need to be one instance that carries most of the traffic. The cost gets distributed among many servers and instances, and they can choose how to fund the server independently (many instance owners spend their own money to a point, then bridge the gap with donations from users).

              I’m just not sure that’s the best way to cut down bots, IMHO.

      • apemint@kbin.social · 8 points · 1 year ago

        It’s not immune but until the fediverse reaches a critical mass, we’re safe… probably.
        After that, it will be the same whac-a-mole game we’re used to and somehow I don’t think we’ll win.

      • CynAq@kbin.social · 5 points · 1 year ago

        Right now, we can already recognize lower-quality bots in conversation. AI-generated “art” is already so distinctive that almost nobody fails to spot it.

        Language is a human instinct. Our minds create it, we can use it in all sorts of ways, bend it to our will however we want.

        By the time bots become good enough to be indistinguishable online, they’ll either be actually worth talking to, or they will simply be another corporate shill.

        • MrsEaves@kbin.social · 3 points · 1 year ago

          I was wondering about this myself. If a bot presents a good argument that promotes discussion, is the presence of a bot automatically bad?

          I don’t love that right now, the focus is on eliminating or silencing the voice of bots, because as you point out, they’re going to be indistinguishable from human voices soon - if they aren’t already. In the education space, we’re already dealing with plagiarism platforms incorrectly claiming real student work is written by ChatGPT. Reading a viewpoint you disagree with and immediately jumping to “bot!” only serves to create echo chambers.

          I think it’s better and safer long term to educate people to think critically, assume good intent, know their boundaries online (i.e., don’t argue when you can’t be coherent about it and have to devolve into name-calling, etc.), and focus on the content and argument of the post, not who created it — unless it’s very clear from a look at their profile that they’re arguing in bad faith or astroturfing. A shitty argument won’t hold up to scrutiny, and you don’t run the risk of silencing good conversation from a human with an opposing viewpoint. Common agreement on community rules such as “no hate speech”, or limiting self-promotion/reviews/ads to certain spaces and times, is still the best and safest way to combat this, and from there it’s a matter of mods enforcing the boundaries on content, not on who they think you are.

          • Aesthesiaphilia@kbin.social · 13 points · 1 year ago

            “I was wondering about this myself. If a bot presents a good argument that promotes discussion, is the presence of a bot automatically bad?”

            Because bots don’t think. They exist solely to push an agenda on behalf of someone.

          • BraveSirZaphod@kbin.social · 5 points · 1 year ago

            “If a bot presents a good argument that promotes discussion, is the presence of a bot automatically bad?”

            If the people involved in the conversation are there because they are intending to have a conversation with people, yes, it’s automatically bad. If I want to have a conversation with a chatbot, I can happily and intentionally head over to ChatGPT etc.

            Bots are not inherently bad, but I think it’s imperative that our interactions with them are transparent and consensual.

          • Umbrias@beehaw.org · 2 points · 1 year ago

            Part of the problem is that bots unfairly empower the speech of those with the resources to dominate and dictate the conversation space; even in good faith, they disempower everyone else. Even the act of seeing the same ideas over and over can sway whole zeitgeists. Now imagine what bots can do by dictating the bulk of what’s even talked about at all.

      • TheRazorX@kbin.social · 2 points · 1 year ago

        Nothing is immune, but at least on the fediverse it’s unlikely that API access will be revoked for the tools used to detect said bots.

      • bug · 1 point · 1 year ago

        Throw in a captcha every now and again maybe?

    • taurentipper@kbin.social · 5 points · 1 year ago

      I agree with you 100%. If their motive is to make profit for shareholders or themselves, they’re imo inevitably going to do this.