ChatGPT-maker OpenAI has said it considered alerting Canadian police last year about the activities of a person who months later committed one of the worst school shootings in the country’s history.

OpenAI said that last June it identified the account of Jesse Van Rootselaar through its abuse-detection efforts, flagging it for “furtherance of violent activities”.

The San Francisco tech company said on Friday it considered whether to refer the account to the Royal Canadian Mounted Police (RCMP) but determined at the time that the account activity did not meet a threshold for referral to law enforcement.

OpenAI banned the account in June 2025 for violating its usage policy.

  • GameGod@lemmy.ca · 4 days ago (edited)

    I think this should piss off a lot of people. Instead of doing something, they opted to do nothing, and now they’re exploiting the tragedy as a PR opportunity. They’re trying to shape their public image as an all-powerful arbiter: worship the AI, or they will allow death to come to you and your family.

    Or perhaps this is all just rage bait, to get us talking about this piece of shit company, to postpone the inevitable bursting of the AI bubble.

    Edit: This is a sales pitch from OpenAI to the RCMP, with them saying they’ll sell police forces an intelligence feed. It just comes across as horribly tone deaf and is problematic for so many reasons.

    • non_burglar@lemmy.world · 3 days ago

      I understand your point, but there would also be legal ramifications and scary potential consequences had this referral actually happened.

      For instance, do we want ICE to have access to data about user behaviour? They might already have that.

      Who decides the bar of acceptable behaviour?

      • hector@lemmy.today · 2 days ago

        Peter Thiel and his ilk decide acceptable behavior, along with our politicians and their appointees, sadly. Officials will also be given ways to put names they don’t like into the bad-score categories, even when those people don’t qualify under the system’s own rules. That is always one of the selling points to the authorities.

      • GameGod@lemmy.ca · 3 days ago

        I’m confident that ICE and other US law enforcement agencies already have access to it. There is no expectation of privacy for anything you enter into a cloud-based LLM like ChatGPT, or even into any search engine.

        The consequences are already there and have been for like 15 years.