How can one configure their Lemmy instance to reject illegal content? And I mean the bad stuff, not just the NSFW stuff. There are some online services that will check images for you, but I’m unsure how they can integrate into Lemmy.

As Lemmy gets more popular, I’m worried nefarious users will post illegal content that I am liable for.

  • snowe@programming.dev · 8 points · 2 days ago

    You can set up Cloudflare as your CDN and turn on CSAM detection. It will automatically block links to known CSAM from the managed global CSAM hash lists.

    If you want something in addition to that, you can use db0’s plugin, which adds a similar capability.
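
    To give a rough idea of how a plugin like that hooks in: as far as I know, recent pict-rs versions can be pointed at an external validation endpoint that approves or rejects each upload before it is stored, and db0’s tooling works along those lines. The sketch below is not the plugin’s actual code; the endpoint shape and the looks_like_csam helper are assumptions, so treat it purely as an illustration of the integration point.

    ```python
    # Hypothetical upload-validation endpoint, in the spirit of db0's plugin.
    # The idea: the image host POSTs every new upload here and only stores the
    # file if it gets a 2xx back. Check the pict-rs docs for the exact option
    # that enables an external validation URL before relying on this.
    from flask import Flask, Response, request

    app = Flask(__name__)


    def looks_like_csam(image_bytes: bytes) -> bool:
        """Placeholder for a real check (hash-list lookup, classifier, or a
        third-party scanning API). Always returns False in this sketch."""
        return False


    @app.route("/scan", methods=["POST"])
    def scan() -> Response:
        image_bytes = request.get_data()
        if looks_like_csam(image_bytes):
            # A non-2xx status tells the upload pipeline to reject the file.
            return Response("rejected", status=403)
        return Response("ok", status=200)


    if __name__ == "__main__":
        app.run(host="127.0.0.1", port=8080)
    ```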

    • Onno (VK6FLAB)@lemmy.radio · 4 points · edited · 2 days ago

      From your description it’s unclear: does this also block CSAM that’s physically stored on your infrastructure, or just links to external content?

      CloudFlare is currently attempting to block LLM bots and doing a shit job at it. I’m guessing that any CSAM blocking would be incomplete at best.

      What happens if something “gets through”, or if non-CSAM content is wrongly blocked? What actually happens in practice, and what are the legal implications, given that I doubt CloudFlare would ever assume liability for content on your infrastructure?

      Edit: I thought I’d also point out that this is not the only type of content that could land you in a legal black hole. For example, a post that breaches a suppression order made by a court in Melbourne, Australia, or defamatory content, etc.

      • snowe@programming.dev · 2 points · 18 hours ago

        It blocks access to the link on your site. For example, people have uploaded CSAM to programming.dev; the links are immediately blocked (i.e. no one can get to them except an instance owner looking directly in the pictrs database), and in the CF dashboard you get a notification with a link to the page where it occurred.

        CSAM blocking works off a shared, industry-agreed hash list of known material, maintained by a consortium of large tech companies (a rough sketch of the matching idea is at the end of this comment). If novel CSAM is uploaded to your instance, then yes, it will fail to catch that; db0’s plugin might catch it though. LLM blocking doesn’t have the benefit of a bunch of multi-billion-dollar companies trying to stop it, in fact they’re doing the exact opposite, so yes, LLM blocking sucks.

        For your edit: I would expect you to have an admin email address set up where you would receive such notices. Pretty much globally, you are not responsible for this kind of content until you have been notified, so pay attention to your email.
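
        For what it’s worth, the matching mechanics above are simple; what a small operator can’t get is the list itself, since the industry hash sets (PhotoDNA and similar) are perceptual hashes shared only with vetted organisations. Purely as an illustration, assuming a hypothetical blocklist file of SHA-256 digests:

        ```python
        # Illustrative hash-list check only. Real systems use perceptual hashes
        # (robust to resizing and re-encoding) and lists that are not public;
        # a plain SHA-256 lookup like this only catches byte-identical copies.
        import hashlib
        from pathlib import Path


        def load_blocklist(path: str) -> set[str]:
            """One lowercase hex SHA-256 digest per line (hypothetical file format)."""
            lines = Path(path).read_text().splitlines()
            return {line.strip().lower() for line in lines if line.strip()}


        def is_blocked(image_path: str, blocklist: set[str]) -> bool:
            digest = hashlib.sha256(Path(image_path).read_bytes()).hexdigest()
            return digest in blocklist


        if __name__ == "__main__":
            blocked = load_blocklist("known_bad_hashes.txt")
            print(is_blocked("upload.jpg", blocked))
        ```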

  • Onno (VK6FLAB)@lemmy.radio · 8 points · 2 days ago

    I am not a lawyer and I don’t play one on the internet.

    To my understanding, the only real prevention is controlling who can have an account on your instance.

    That said, it’s not clear to me how federated content is legally considered.

    The only thing I can think of is running a bot on your instance that uses the API of a service like the ones you mention to detect and deal with such images (a rough sketch is at the end of this comment).

    Your post is the first one I’ve seen recently that even describes the liability issue, but in my opinion it’s the single biggest concern in the fediverse, and it’s why I’ve never hosted my own instance.
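
    A rough sketch of the shape such a bot could take, assuming a hypothetical check_image_url() wired to whatever scanning service you pick; the /api/v3/post/list path is Lemmy’s normal post listing, but verify paths, parameters and auth against the API docs for your version:

    ```python
    # Rough sketch of a moderation bot that polls a Lemmy instance for new posts
    # and runs any linked images past an external scanning service.
    # check_image_url() is hypothetical; wire it to whichever service you use.
    # Endpoint paths and params are from memory of Lemmy's v3 HTTP API; verify them.
    import time

    import requests

    INSTANCE = "https://example.lemmy.instance"


    def check_image_url(url: str) -> bool:
        """Return True if the external scanning service flags this image."""
        raise NotImplementedError("integrate your scanning service here")


    def fetch_new_posts(limit: int = 20) -> list[dict]:
        resp = requests.get(
            f"{INSTANCE}/api/v3/post/list",
            params={"sort": "New", "limit": limit},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json().get("posts", [])


    def scan_once() -> None:
        for item in fetch_new_posts():
            post = item.get("post", {})
            url = post.get("url") or post.get("thumbnail_url")
            if url and url.lower().endswith((".jpg", ".jpeg", ".png", ".webp", ".gif")):
                if check_image_url(url):
                    # Flag for moderator review rather than auto-deleting.
                    print(f"FLAGGED: post {post.get('id')} -> {url}")


    if __name__ == "__main__":
        while True:
            scan_once()
            time.sleep(300)  # poll every five minutes
    ```

    Flagging for human review rather than auto-removing keeps a false positive from silently deleting legitimate content.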