• Fredselfish@lemmy.world · 97 points · 1 year ago

    Wow, pocket change. Why are fines always so small? This is like fining the average person 3 cents. It won’t stop them from just doing it again.

    Fines need to be a large percentage of gross wealth. Musk is one of the richest men in the world; fine him billions.

  • Pxtl@lemmy.ca · 23 points · 1 year ago

    Remember when all the Musk fanboys were claiming that Musk cleaned up the CSAM and anybody who opposed him was obviously a pedophile? Pepperidge Farm remembers.

  • iamtheplatypus@lemmy.world · 9 points · 1 year ago

    Musk won’t pay — like all other fines (invoices, rent), he commands his CEO to ignore them. He’ll say it’s “too high.”

  • AutoTL;DR@lemmings.world (bot) · 6 points · 1 year ago

    This is the best summary I could come up with:


    SYDNEY, Oct 16 (Reuters) - An Australian regulator has fined Elon Musk’s social media platform X A$610,500 ($386,000) for failing to cooperate with a probe into anti-child abuse practices, a blow to a company that has struggled to keep advertisers amid complaints it is going soft on moderating content.

    Though small compared to the $44 billion Musk paid for the website in October 2022, the fine is a reputational hit for a company that has seen a continuous revenue decline as advertisers cut spending on a platform that has stopped most content moderation and reinstated thousands of banned accounts.

    Most recently the EU said it was investigating X for potential violation of its new tech rules after the platform was accused of failing to rein in disinformation in relation to Hamas’s attack on Israel.

    “If you’ve got answers to questions, if you’re actually putting people, processes and technology in place to tackle illegal content at scale, and globally, and if it’s your stated priority, it’s pretty easy to say,” Commissioner Julie Inman Grant said in an interview.

    Under Australian laws that took effect in 2021, the regulator can compel internet companies to give information about their online safety practices or face a fine.

    Inman Grant said the commission also issued a warning to Alphabet’s (GOOGL.O) Google for noncompliance with its request for information about handling of child abuse content, calling the search engine giant’s responses to some questions “generic”.


    The original article contains 625 words, the summary contains 239 words. Saved 62%. I’m a bot and I’m open source!

    • squiblet@kbin.social · 17 points · edited · 1 year ago

      The Fediverse is a bunch of independent websites potentially connected by compatible software, not one entity, so there’s not really a basis for comparison. You could ask about individual instances. But also, this is about “failing to cooperate with a probe into anti-child abuse practices,” not hosting or failing to moderate material. Australian law says the regulator can ask sites about their policies, and they have to at least respond.

      • blazera@kbin.social · 2 points · 1 year ago

        The article has their response. Given the warning to Google as well, apparently the responses also have to be good enough for them.

        • squiblet@kbin.social · 6 points · 1 year ago

          They said

          X’s noncompliance was more serious, the regulator said, including failure to answer questions about how long it took to respond to reports of child abuse, steps it took to detect child abuse in livestreams and its numbers of content moderation, safety and public policy staff.

          So yes, all the questions need to be at least addressed and probably saying “we don’t do that because Elron doesn’t care about it” wouldn’t suffice either.

              • blazera@kbin.social · 1 point · 1 year ago

                Because we’ve gone in a circle: I asked whether the site we’re on right now is doing anything better with regard to this problematic material, since folks seem to care about Twitter’s failure to address it. You responded that it’s not about their lack of addressing the material, but their lack of a response to the regulatory inquiry. I pointed out that they did respond, and your response is that they actually need a good answer about how they’re addressing the material. Which is the same premise as the article and what my first comment was about. It’s hypocrisy, because the standard isn’t being applied to the fediverse; no one is up in arms about our lack of automatic detection of problematic material or surveillance of private messaging. Because we care about privacy when we’re not being blinded by well-intentioned Musk hate.

                • squiblet@kbin.social · 4 points · edited · 1 year ago

                  I posted from the article that they didn’t respond to several questions:

                  X’s noncompliance was more serious, the regulator said, including failure to answer questions about how long it took to respond to reports of child abuse, steps it took to detect child abuse in livestreams and its numbers of content moderation, safety and public policy staff.

                  I speculated that probably they also need adequate responses, but that’s not what the article or the fine is about.

                  If one of the individual sites in the Fediverse were asked by Australian regulators, I bet it would respond fully. It’s not quite the same situation as Twitter, either: none of these sites are large enough to require many staff members, and none has its own live-streaming platform.

    • Doctor xNo@r.nf · 7 points · edited · 1 year ago

      Since there is no hierarchical top-level moderator/admin and every instance is supervised by its respective owner, responsibility for safety is effectively delegated to individual instance admins as far as their instance goes. That’s what I make of it, at least; anyone feel free to correct me if I’m wrong. Also, the above doesn’t account for any future law that might state otherwise (decision-making entities have weird, unpredictable logic… 😅)

      As for Mastodon itself, though, it could use some upgrades to its user management and reporting features: an option to automatically react to certain categories of reports (like child abuse and extreme/shocking violence), for example with a temp-ban until review, so that anyone reported for those things can’t keep going until an admin sees and processes the report. And reports are definitely not visible enough yet.
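      The auto-reaction idea described above could be sketched roughly like this. To be clear, this is a hypothetical illustration, not Mastodon’s actual moderation code or API: the category names, the `Account`/`Report` classes, and the function names are all made up for the example.

```python
# Hypothetical sketch: reports in certain severe categories immediately
# suspend the reported account until an admin reviews the report.
# All names here are illustrative, not Mastodon's real API.

from dataclasses import dataclass

# Report categories that trigger an automatic temp-ban (assumed names)
AUTO_SUSPEND_CATEGORIES = {"child_abuse", "extreme_violence"}

@dataclass
class Account:
    handle: str
    suspended_until_review: bool = False

@dataclass
class Report:
    reported: Account
    category: str
    reviewed: bool = False

def file_report(report: Report) -> None:
    """Queue a report; severe categories suspend the account immediately."""
    if report.category in AUTO_SUSPEND_CATEGORIES:
        report.reported.suspended_until_review = True

def admin_review(report: Report, violation_confirmed: bool) -> None:
    """An admin processes the report; lift the temp-ban if it was unfounded."""
    report.reviewed = True
    if not violation_confirmed:
        report.reported.suspended_until_review = False
```

      The point of the design is that the ban is provisional: it takes effect instantly on the report, but an admin’s review remains the final word, so a false report only suspends someone until a human looks at it.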

      • blazera@kbin.social · 3 points · 1 year ago

        And things like automatic detection and direct message surveillance like these regulators are asking for?

        • Doctor xNo@r.nf · 1 point · 1 year ago

          Well, if those become necessary, I’ll just have to add Mastodon, along with anything too well known, to the bin of government-ruined software and start using hidden services. I will never willfully comply with spyware, not even (read: especially not) government-approved spyware.

          I have no idea if Mastodon has any plans to add those to the instance software, though. It probably will if legally obligated, I suppose, but I still sincerely hope not (as I also sincerely hope this proposal gets dismissed for the obvious privacy laws it contradicts and the vulnerabilities backdoors create).

    • abhibeckert@lemmy.world · 7 points · edited · 1 year ago

      “The Fediverse” is about 13,000 separate services that are each individually responsible for illegal content on their systems. Some probably aren’t doing a good enough job, but most of them are and they’ve mostly defederated the ones that fail to do so.

      And why wouldn’t they? Many hands make light work, and the fediverse has tens of thousands of moderators to deal with far fewer posts than the X network. Twitter had a decent moderation team once, but Musk has gutted it.