As soon as Apple announced its plans to inject generative AI into the iPhone, it was as good as official: The technology is now all but unavoidable. Large language models will soon lurk on most of the world’s smartphones, generating images and text in messaging and email apps. AI has already colonized web search, appearing in Google and Bing. OpenAI, the $80 billion start-up that has partnered with Apple and Microsoft, feels ubiquitous; the auto-generated products of its ChatGPT and DALL-E are everywhere. And for a growing number of consumers, that’s a problem.

Rarely has a technology risen—or been forced—into prominence amid such controversy and consumer anxiety. Certainly, some Americans are excited about AI, though a majority said in a recent survey, for instance, that they are concerned AI will increase unemployment; in another, three out of four said they believe it will be abused to interfere with the upcoming presidential election. And many AI products have failed to impress. The launch of Google’s “AI Overview” was a disaster; the search giant’s new bot cheerfully told users to add glue to pizza and that potentially poisonous mushrooms were safe to eat. Meanwhile, OpenAI has been mired in scandal, incensing former employees with a controversial nondisclosure agreement and allegedly ripping off one of the world’s most famous actors for a voice-assistant product. Thus far, much of the resistance to the spread of AI has come from watchdog groups, concerned citizens, and creators worried about their livelihood. Now a consumer backlash to the technology has begun to unfold as well—so much so that a market has sprung up to capitalize on it.


Obligatory “fuck 99.9999% of all AI use-cases, the people who make them, and the techbros that push them.”

  • @umbrella@lemmy.ml
    5 months ago

    the solution here is not being luddites, but taking the tech for ourselves, not putting it into the hands of some stupid techbro who only wants to see the line go up.

    • @TheFriar@lemm.ee
      5 months ago (edited)

      But that’s the point. It’s already in their hands. There is no ethical and helpful application of AI that doesn’t go hand in hand with these assholes holding mostly a monopoly on it. Us using it for ourselves doesn’t take it out of their hands. Yes, you can self-host your own and make it helpful in theory, but the truth is this is a tool being weaponized by capitalists to steal more data and amass more wealth and power. This technology is inextricable from the timeline we’re stuck in: vulture capitalism in its latest, most hostile stages. This shit, at this moment, is only a detriment to everyone but the tech bros and their data harvesting and “disrupting” (mostly of the order that allowed the “less skilled” workers among us to survive, albeit just barely). I’m all for less work. In theory. Because this iteration of “less work” is only tied to “more suffering”: moving people from pointless jobs to assisting the AI that took over those pointless jobs, all to increase profits. This can’t lead to utopia. Because capitalism.

      • Barry Zuckerkorn
        5 months ago

        To put it in simpler terms:

        When Alice chats with Bob, Alice can’t control whether Bob feeds the conversation into a training data set to set parameters that have the effect of mimicking Alice.

    • @kibiz0r@midwest.social
      5 months ago

      So, literally the story of the actual Luddites. Or what they attempted to do before capitalists poured a few hundred bullets into them.