Google for information about an owl found only in Australia.

First result, a blog claiming: “Ninox strenua is important to the Native Americans of the US”

Every single result is a website, and they all have the same layout.

    • Dirt_Owl [comrade/them, they/them]@hexbear.netOP · 10 points · 3 days ago

News websites were the first to do this, even before AI tbh. All of them would copy-paste the same article right down to the headline. Then the top-ten websites started doing it.

      Now it’s every search result…

      • Enjoyer_of_Games [he/him]@hexbear.net · 4 points · 3 days ago

        Most news organizations (this applies to traditional papers as well as sites) are just local outlets that buy most of their articles wholesale from newswire services. Maybe they have someone reword them a little, maybe not, and then they slap on a stock photo. They might have one local journalist, maybe even a photographer, to write some articles based on local police reports, government disclosures, and whatever is pitched to them by PR from local companies and promoters.

        Everyone forgets this, but originally “fake news” referred to news websites that were basically fake versions of these local news sites, made for towns that didn’t have a newspaper or towns that didn’t exist at all. These existed either to get clicks for ad revenue or to push completely fabricated stories for propaganda (almost always of the far-right kind). Publishing on sites made to look like a small-town newspaper website gave clickbait articles a veneer of legitimacy, relying on people to only read the headline on social media, then click on the article to see if it looked vaguely legit before sharing it on. These sites would steal from other news sites or use earlier procedural generation (from before people would call this AI, even though it’s nothing different or new) to create the filler fake content that propped up the illusion. Ofc then Trump, with the aid of the entire media, successfully managed to redefine “fake news” to mean any partisan news you don’t like.

        Listicle sites were basically the same thing. They started as review sites, then squeezed employees into writing lower and lower effort reviews based on little to no testing, eventually becoming little more than reworded versions of the official product descriptions. Top-ten lists could be easily monetized with affiliate links and earned high search rankings, since consumers often just wanted someone to tell them what to buy. Automating these lists has been simple for a long time, and since there are no real reviews around any more, the average person will have a hard time figuring out whether they are bullshit. The only way to slap a veneer of legitimacy onto these lists without actually doing the work of testing and writing real reviews is to attach them to a media brand with some reputation, which led to legacy magazines being bought up and turned into zombie listicle sites, and eventually even major outlets like the NYT started their own product-listicle divisions.

        It’s very cool I love it.

  • Future_Honkey [none/use name]@hexbear.net · 22 points · 3 days ago (edited)

    The Internet is becoming less useful, and searching for things (which I once took pride in being good at) is now a slow, terrible process of sifting through all the trash. Yeah, I’m pretty mad about it

    • dat_math [they/them]@hexbear.net · 15 points · 3 days ago (edited)

      it is infuriating how heavily I have to lean on hard-earned academic research skills just to find numbers (not even fancy science parameters, just ingredient ratios in recipes) that even DuckDuckGo returned reliable sources for on the first page in fucking 2016

    • sooper_dooper_roofer [none/use name]@hexbear.net · 8 points · 3 days ago

      kinda wish search engines would be completely random

      like they’d just shuffle up all the results

      I already have to save anything I want to look at later anyway, since half of all websites go down within a couple of years. Might as well get some neutrality and novelty exposure while I’m at it

  • LaGG_3 [he/him, comrade/them]@hexbear.net · 27 points · 3 days ago

    How do you know that Native Americans didn’t visit Australia and be like “damn, that’s a great owl”?

    I’ve been coping with this shit by being ok with not totally knowing some things lol

  • halfpipe [they/them]@hexbear.net · 16 points · 3 days ago

    We thought digital archives would be useless because people in the future likely wouldn’t have the technology to access them. How many computers even have CD or floppy disk drives anymore? Turns out they will also be useless because they’re full of AI-generated garbage.

    Meanwhile, archival paper lasts three centuries minimum. Much longer if it’s stored well.

  • stigsbandit34z [they/them]@hexbear.net · 9 points · 3 days ago (edited)

    I keep seeing leftists say there will always be places to go on the internet

    But servers are the modern day press

    lenin-heisenberg

    Who owns most servers? Amazon

    Funny, that

  • Kereru [he/him]@hexbear.net · 10 points · 3 days ago

    The vector search tech that all search engines now lean on so heavily flattens everything to the lowest common denominator.

    You’re asking about Australian owls, but from the USA? “You want information that relates aus owls to the usa, trust me bro.”
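    To see the mechanism being described: in vector search, documents and queries are reduced to embedding vectors, and ranking is just nearest-neighbour matching (typically by cosine similarity), so any “US user” signal blended into the query vector drags US-centric pages up the ranking. A toy sketch, using hand-made 3-d vectors as stand-ins for real embeddings (the axis meanings and document titles are invented for illustration):

    ```python
    import math

    def cosine(a, b):
        """Cosine similarity between two vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    # Toy "embeddings" with made-up axes: [owl-ness, Australia-ness, US-ness].
    docs = {
        "Powerful owl habitat in Victoria": [0.9, 0.9, 0.0],
        "Top 10 owl facts (US listicle)":   [0.7, 0.1, 0.8],
        "Owls in Native American folklore": [0.9, 0.0, 0.9],
    }

    # A query about Australian owls, with some "US user" signal mixed in
    # (the kind of personalization blending the comment complains about).
    query = [1.0, 0.6, 0.5]

    ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
    for d in ranked:
        print(f"{cosine(query, docs[d]):.3f}  {d}")
    ```

    With these numbers the Australian page still wins, but the US-flavoured pages sit right behind it; crank up the third component of the query and they overtake it, even though the searcher never asked about the US.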

    • Orcocracy [comrade/them]@hexbear.net · 14 points · 3 days ago

      Google generally privileges US-based sources and has always been shit at searching for some topics from outside the US. For example, if you search for where a TV show is streaming, Google has for years given results pointing to bullshit like Hulu or Peac*ck or other platforms that are only available in the US. To give a more rigorous example, this study focusing on art ( https://academic.oup.com/dsh/article-abstract/36/3/607/6042086 ) shows how Google is heavily biased towards US-based artworks and galleries.

      Google is and always has been an instrument of American (cultural) imperialism.

  • LLMao [mirror/your pronouns]@hexbear.net · 3 points · 3 days ago

    Analysis of AI and Disinformation Through a Dialectical Materialism Framework

    1. The Two World Outlooks

    • Metaphysical Outlook: A static view might treat AI as a neutral tool, ignoring its dynamic role in both propagating and combating disinformation. For example, focusing solely on AI’s technical capabilities without addressing its socio-political impact.
    • Dialectical Outlook: Recognizes AI as a contradictory force—creating hyper-realistic disinformation (e.g., deepfakes via GANs) while simultaneously detecting it through automated fact-checking and blockchain verification. This duality reflects the interconnectedness of technological advancement and societal consequences.

    2. Universality of Contradiction

    AI’s role in disinformation is marked by inherent contradictions:

    • Creation vs. Detection: AI enables mass production of disinformation (e.g., synthetic text, deepfakes) but also powers tools to identify and mitigate it, such as NLP algorithms and bot-detection systems.
    • Accessibility: Generative AI lowers barriers for malicious actors (e.g., low-cost deepfake tools) while democratizing countermeasures (e.g., open-source detection software).
    • Trust Erosion vs. Trust Building: AI-generated content fuels the “liar’s dividend” (sowing doubt in facts) but can also enhance transparency (e.g., blockchain for content provenance).

    3. Particularity of Contradiction

    Each form of AI-driven disinformation has unique characteristics:

    • Deepfakes: Leverage GANs to manipulate audio/video, targeting political figures (e.g., fake Biden/Trump videos).
    • Social Bots: Semi-automated accounts amplify divisive narratives, as seen in the 2016 U.S. elections.
    • Micro-Targeting: AI algorithms exploit user data to disseminate disinformation to vulnerable groups, such as vaccine skeptics during COVID-19.
    • Automated Fact-Checking: Combines NLP and machine learning to flag false claims but struggles with sarcasm or cultural context.

    4. Principal Contradiction and Principal Aspect

    • Principal Contradiction: The arms race between AI-generated disinformation (speed, scale) and AI-based detection (accuracy, adaptability). For instance, while platforms deploy AI to flag fake news, adversarial actors refine deepfakes to evade detection.
    • Principal Aspect: The economic model of social media platforms, which prioritizes engagement over truth. AI-driven recommendation systems (e.g., YouTube’s algorithm) inadvertently promote divisive content to maximize user retention, exacerbating disinformation spread.

    5. Identity and Struggle of Contradictory Aspects

    • Identity: AI’s dual roles are interdependent. For example, the same NLP techniques used to generate fake news (e.g., ChatGPT) can train models to detect synthetic text.
    • Struggle: The tension between disinformation’s societal harm and free speech protections. Automated moderation tools risk over-censorship, while under-regulation allows disinformation to thrive.
    • Transformation: AI’s role evolves with context. During elections, deepfakes may dominate (antagonistic), while in public health crises, AI-driven chatbots can debunk myths (non-antagonistic).

    6. Antagonism in Contradiction

    • Antagonistic: State-sponsored disinformation campaigns (e.g., Venezuela’s AI-generated news anchors) or AI-aided censorship (e.g., China’s chatbot restrictions) create systemic distrust and social fragmentation.
    • Non-Antagonistic: Collaborative efforts like the EU’s co-regulation model balance AI moderation with ethical safeguards, emphasizing transparency and digital literacy.

    7. Conclusion

    The interplay between AI and disinformation embodies dialectical materialism’s core principles:

    • Universality: Contradictions are inherent in AI’s dual role as creator and mitigator of disinformation.
    • Particularity: Each technological application (e.g., deepfakes, bots) demands context-specific solutions.
    • Fluidity: The principal contradiction shifts with material conditions (e.g., election cycles, pandemics).
    • Interconnection: Global platforms, regulatory frameworks, and societal resilience shape outcomes.

    Strategic Implications:

    • Ethical AI Development: Prioritize explainable AI (XAI) to reduce algorithmic bias and ensure transparency.
    • Regulatory Synergy: Combine AI detection with blockchain verification to enhance content provenance.
    • Societal Resilience: Invest in digital literacy to counteract cognitive heuristics (e.g., “seeing is believing”) exploited by deepfakes.

    Synthesis:
    AI’s entanglement with disinformation reflects the broader struggle between technological innovation and human agency. By addressing contradictions through adaptive regulation, ethical AI, and public education, societies can navigate this dialectical challenge while preserving democratic integrity.