This issue is already quite widely publicized, and frankly “we’re handling it and removing this” is a much more harmful response than I would hope to see. Especially as the admins of that instance have not yet upgraded the frontend version to apply the urgent fix.

It’s not like this was a confidential bug fix; this is a zero-day being actively exploited. Please be more cooperative and open regarding these issues in your own administration if you’re hosting an instance. 🙏
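
For what it’s worth, anyone can check what version an instance is reporting without admin access. Below is a minimal sketch (assuming the instance exposes standard NodeInfo, which Lemmy does); note that NodeInfo reports the backend version, and the lemmy-ui frontend version is tracked separately, so treat this only as a coarse “has this instance updated recently” signal.

```python
# Coarse check of the version a Lemmy instance reports via NodeInfo.
# Caveat: this is the backend version; the lemmy-ui frontend version
# (where the urgent fix landed) is separate and not exposed here.
import sys
import requests

def reported_version(instance: str) -> str:
    # Standard NodeInfo discovery: the well-known document links to the
    # actual nodeinfo document(s).
    discovery = requests.get(
        f"https://{instance}/.well-known/nodeinfo", timeout=10
    )
    discovery.raise_for_status()
    nodeinfo_url = discovery.json()["links"][0]["href"]

    nodeinfo = requests.get(nodeinfo_url, timeout=10).json()
    software = nodeinfo.get("software", {})
    return f'{software.get("name", "?")} {software.get("version", "?")}'

if __name__ == "__main__":
    host = sys.argv[1] if len(sys.argv) > 1 else "lemmy.ml"
    print(f"{host} reports {reported_version(host)}")
```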

  • @entropicshart@lemmy.world · 1 year ago

    When a vulnerability at this level happens and a patch is created, visibility is exactly what you need.

    It is the reason CVE sites exist and why so many organizations have their own (e.g. Atlassian, SalesForce/Tableau).

    It is also why those CVEs will be on the front page of sites like https://news.ycombinator.com to ensure folks are aware and taking precautions.

    Organizations that do not report or highlight such critical vulnerabilities are only hurting their users.

    • TragicNotCute · 1 year ago

      It is common practice to notify affected parties privately and then give full details to the public after the threat is largely neutralized. Expecting public disclosure with technical details on how to perform the attack in less than 24 hours goes against established industry norms.

      • Dark Arc · 1 year ago

        That only stands true when the issue is not being actively exploited.

        • 𝓢𝓮𝓮𝓙𝓪𝔂𝓔𝓶𝓶 · 1 year ago

          I strongly disagree with some of your points.

          > Yes, the vulnerability is out there. Maybe the root cause actually introduced a LOT of vulnerabilities. The fix is being pushed at a frantic pace. To expect the devs to take time out of the mad rush to notify those impacted to do a proper writeup is just insanity.

          It’s not insanity. It’s called incident management and it’s something the development team needs to build a proper procedure around, given the expanded scope of this project. I agree that the devs working on identifying, mitigating, and fixing the vulnerability should not be expected to also handle the communication. They need to designate someone for that role.

          A 0-day was actively being exploited in the wild. There was confusion, misinformation, and a general lack of information.

          You need to:

          • Indicate that you are aware of an ongoing problem and are working to identify it. This lets people know there is an issue and that you’re aware of it. You can do this without giving specific details on how to replicate the exploit. This includes server admins publicly acknowledging that they are aware of the issue and will provide updates when they have them, to alleviate the concerns of their user base.
          • Once a mitigation is known, you publish that in as many channels as you need to get the information out to the people who need it, so that server admins are aware of what they need to do to reduce their risk.
          • Once a fix is in place, you publish that, same as above.

          > The way I see it? This (hopefully) got fixed pretty much instantly and there is active work to get the fix applied by the people who need to apply it. That is what should be done.

          And how do you know this since it’s not been communicated? Most of the information I (as a person running a lemmy server) have been able to glean is from random threads spread across random communities.

          > Give it a week or two to see how they handle the public disclosure side of things.

          A couple of weeks for a postmortem? Sure. A couple of weeks for an active, in-the-wild 0-day, to officially communicate that the problem exists and how to mitigate or patch it? Absolutely not. I still don’t see a security advisory on the GitHub repo telling me I should be updating to <insert version> to patch an active exploit, and it’s been how many hours now?
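
          In the meantime, one stopgap for “what version should I be running?” is to poll the repo’s releases directly. A rough sketch, assuming LemmyNet/lemmy-ui publishes its releases (including release candidates) on GitHub; if only git tags exist, the /tags endpoint can be queried similarly, with different fields:

          ```python
          # List the most recent lemmy-ui releases via GitHub's public REST API,
          # flagging release candidates / pre-releases.
          import requests

          resp = requests.get(
              "https://api.github.com/repos/LemmyNet/lemmy-ui/releases",
              params={"per_page": 5},
              timeout=10,
          )
          resp.raise_for_status()

          for release in resp.json():
              flag = " (pre-release)" if release.get("prerelease") else ""
              print(f'{release["tag_name"]}{flag} published {release["published_at"]}')
          ```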

            • 𝓢𝓮𝓮𝓙𝓪𝔂𝓔𝓶𝓶 · 1 year ago

              Is the project small? Yes.

              Did it explode in popularity leaving the devs overwhelmed? Certainly.

              Do I expect them to strictly follow established ITIL incident management? No.

              Do I expect them to communicate in a consistent way when an incident happens? Yes.

              I agree the primary developers should be left to fix the problems, but there are enough active members of the project that someone could have handled communication in a more concise and official way. I don’t consider random posts in asklemmy or selfhosting by random users just guessing to be a substitute for that.

              If the project is going to persist and grow it needs to get better at that. Pointing it out isn’t shitposting.

                • 𝓢𝓮𝓮𝓙𝓪𝔂𝓔𝓶𝓶 · 1 year ago

                  > I mean, there is a reason reddit hired so many people over the years. And if you are going to jump down the throats of people who prioritize fixing an issue and counting on “active members” to notify users over writing up the reports that many of those users won’t even look at? You want a production quality piece of software. That means Reddit or Threads or Bluesky.

                  Why are you getting so defensive? The only throat getting jumped down is mine, by you. I’m expressing my opinion of gaps in the communication of the project and how I think it can be improved. In a conversation thread on selfhosted no less. I’m not out in !lemmy@lemmy.ml bitching them out, submitting issues, or otherwise harassing the devs. Pointing out a gap and suggesting solutions is neither shitposting nor jumping down someone’s throat.

                  > I’ve been through similar a decent number of times on the corporate side. Something has gone very wrong. People want answers. A good manager assesses the situation and responds back “Look, we know what is going on and all hands are on deck to fix it. Making a powerpoint is not fixing it. We’ll do a proper write up for next week but we can either have So and So fix it or report on it.”

                  I think you’re the one confusing this with a large corporate project, not me. There are no managers here, there are no powerpoints, and at no point have I asked for a detailed write-up. I asked for someone on the project, who isn’t actively working on identifying and coding the fix, to be the “point man”: post a simple sticky at the top of !lemmy@lemmy.ml, xposted to !lemmy_support@lemmy.ml, that indicates there’s a problem, they’re aware of it, and a fix is being worked on. Once mitigations are identified or fixes are published, update the post with that. Ideally, a GitHub security advisory would also be published with the same info, so people not watching Lemmy at the moment can be notified via that channel.

                  > But people very much don’t seem to understand how small this project is. Spend time with passion projects and “open source” projects that AREN’T on the scale of a small-medium sized company and you understand that standards are going to be lower because people have day jobs and so forth.

                  I get it. I have pretty low standards. I’m just saying that a consistent communication strategy going forward for this project would be beneficial.

                  • @tko@tkohhh.social · 1 year ago

                    I’m with you. I figured out through various comments that I should update my UI to 0.18.2-rc.1, and also run an update statement on my database to fix the modlog. Only after that did I find the matrix channel. Eventually I also found !lemmy_admin@lemmy.ml which is great, but the only thread there on this issue doesn’t even mention updating the UI. I think if we can get to the point where critical information that admins need to know is consistently posted in one place, it’ll make everybody’s life easier. I don’t think that’s too much to ask.

          • @fuser@quex.cc · 1 year ago

            Whilst I differ somewhat on sharing information about the exploit (knowing something about what was going on allowed some instance admins to take evasive steps), I agree with you completely that there could be a better channel for coordinating communication. I imagine a lot of the discussion went on via Matrix. Under the circumstances the response wasn’t so bad, given the complete lack of formal organization, but yes, it definitely could be improved. You sound quite well-versed in how to handle security/critical incidents. Maybe consider contacting the devs and offering them some help in this area?

              • 𝓢𝓮𝓮𝓙𝓪𝔂𝓔𝓶𝓶 · 1 year ago

              I don’t think I’m asking for a lot. A post on !lemmy@lemmy.ml xposted to !lemmy_support@lemmy.ml that gets pinned to the top. Edit the post when relevant information comes out. Release a security advisory on github as soon as you have enough info to warrant one and keep it up-to-date as well.

              I’m not asking for the troubleshooting to happen out in the open.

              > you sound quite well-versed in how to handle security/critical incidents. Maybe consider contacting the devs and offering them some help in this area?

              I know enough. I’m certainly not an infosec guy; I’m just a sysadmin who’s been doing this long enough to know what should be done. At least partly due to this, there are currently 400 open issues just in lemmy-ui on GitHub. Right now I think the best most of us can do is wait for the dust to settle.

              • @fuser@quex.cc · 1 year ago

                Right, but lemmy.ml is really just one of a thousand-plus instances. We need something instance-independent, a way to propagate info that doesn’t rely on a single point of failure or on Lemmy itself as the communication channel. What happens when lemmy.ml is down, or when no instances are able to post due to a concerted DoS?

                It’s impossible to stop anyone randomly posting stuff on Lemmy, and attackers can post misinformation as well, especially if they compromise admin accounts. Who are we gonna trust in the midst of the next incident? The person posting most prolifically about the UI exploit in progress was using a burner account created just for that purpose. I’m sure there were good reasons for wanting to be anonymous when discussing the work of unknown malicious actors, but it made me think twice about what was being posted at the time.

                  • @fuser@quex.cc · 1 year ago

                    Checks all the boxes: authoritative (authenticated user accounts), central location, not on the fediverse, already relatively well-known by Lemmy users, and provides visibility into remediation. It’s a good idea.

        • @Goodie@lemmy.world · 1 year ago

          Your typical dev is not a technical writer, and shouldn’t be doing the proper write-up.

          If you feel (and it seems you do) that this skill is missing from the Lemmy team, perhaps you should volunteer some time.