• Rentlar@lemmy.ca · 142 points · 9 months ago

    Lemmy server operators can now say they have better 24h uptime than Meta! lol

  • modifier@lemmy.ca · 139 points · 9 months ago

    I am feeling a lot of personal satisfaction that I had no idea this was happening and had to read about it on Lemmy.

    • kinther@lemmy.world · 41 points · 9 months ago

      Someone is having a really bad day today. I wonder if your phone dies when you get a certain number of pages or push notifications.

      • Alk@lemmy.world · 42 points · 9 months ago

        Fun story. I had a flip phone years ago, and you could add multiple recipients to a single text. If the text ran over multiple pages, it would split into several texts. And you could resend already-sent texts.

        So one time I put my girlfriend’s phone number in all 20 recipient slots. I then filled the text to the max size, though I don’t remember how many texts it split into. Then I resent it over and over. This all took like 2 or 3 minutes.

        Her phone was sending notifications over and over for the entire rest of the day. I’d guess at least 8 hours, probably more.

      • maynarkh@feddit.nl · 5 points · 9 months ago

        No, but it’s unusable. I had a weird bug on one of my phones that resent an SMS over and over, as fast as it could, for as long as the phone was on. I wrote the initial SMS, the contents were something like “hey, wanna hang?”, and the poor guy on the other side was blasted with literally constant notifications for several hours.

        Luckily my plan at the time had unlimited free SMS.

    • a lil bee 🐝@lemmy.world · 24 points · 9 months ago

      Looking at the Downdetector shot someone posted above, it’s half the SREs in the country having a bad day. Not sure what the root cause will turn out to be, but damn, that’s a lot of money down the tubes. I would not want to be the person who cost Meta and Google their precious thirty 9’s of availability lol.

      • merc@sh.itjust.works · 4 points · 9 months ago

        It’s likely there’s a shared root cause, like a fiber cut or some other major infrastructure issue. But Down Detector doesn’t really put a scale on its graphs, so it could be that it’s a huge issue at Meta and only a minor, barely-noticeable issue for everyone else. In that case, Meta itself could be the root cause.

        If everyone is mailing themselves their passwords, turning their phones off and on, restarting their browsers, etc. because Meta isn’t working, it could have knock-on effects for everyone else. It could also be that, because Meta is half of the major ad duopoly, the issue affected their ad system, which affected everyone interacting with a Meta ad, which is basically everyone.

        • a lil bee 🐝@lemmy.world · 2 points · 9 months ago

          I’ve been an SRE for a few large corps, so I’ve definitely played this game. I’m with you that it was likely just the FB identity or ad provider causing most of these issues. So glad I’m out of that role now and back to DevOps, where I’m no longer on call.

          • merc@sh.itjust.works · 1 point · 9 months ago

            Yeah. And when the outage is due to something external, it’s not too stressful. As long as you don’t have absolutely insane bosses, they’ll understand that it’s out of your control. So, you wait around for the external system to be fixed, then check that your stuff came back up fine, and go about your day.

            I personally liked being on call when the on-call compensation was reasonable. Like, on-call for 2 12-hour shifts over the weekend? 2 8-hour days off. If you were good at maintaining your systems you had quiet on-call shifts most of the time, and you’d quickly earn lots of days off.

            • a lil bee 🐝@lemmy.world · 1 point · 9 months ago

              Yeah, I’d be less worried about internal pressures (which should be minimal at a halfway decently run org) and more about the external ones. I don’t think you’d actually end up dealing with anything, but I’d know the huge corps that rely on us are pissed.

              Man, your on-call situation sounds rad! I was salaried and just traded off on-call shifts with my team members, no extra time off. Luckily though, our systems were pretty quiet so it hardly ever amounted to much.

              • merc@sh.itjust.works · 1 point · 9 months ago

                I think you want people to want to be on call (or at least be willing to be on call). There’s no way I’d ever take a job where I was on-call and not compensated for being on-call. On-call is work. Even if nothing happens during your shift, you have to be ready to respond. You can’t get drunk or get high. You can’t go for a hike. You can’t take a flight. If you’re going to be so limited in what you’re allowed to do, you deserve to be compensated for your time.

                But, since you’re being compensated, it’s also reasonable to expect that you’ll have to respond to something. If your shifts are always completely quiet, either you or the devs aren’t adding enough new features, or you’re not supervising enough services. You should have an error budget, and you should be spending it. Plus, if you don’t respond to pages often enough, you get rusty, so when there is an incident you’re not as ready to handle it.
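
                For anyone who hasn’t worked with error budgets: a rough back-of-the-envelope sketch of the idea, assuming a made-up 99.9% monthly SLO and a hypothetical 25-minute outage (none of these numbers come from this thread):

```python
# Toy error-budget math: how much downtime a given SLO leaves per month,
# and how much of that budget a single incident burns.
# The SLO and incident length are assumed for illustration only.

SLO = 0.999                        # assumed 99.9% monthly availability target
MINUTES_PER_MONTH = 30 * 24 * 60   # 43,200 minutes in a 30-day month

budget_minutes = (1 - SLO) * MINUTES_PER_MONTH
print(f"Monthly error budget at {SLO:.1%}: {budget_minutes:.1f} minutes")
# -> Monthly error budget at 99.9%: 43.2 minutes

incident_minutes = 25              # hypothetical outage length
print(f"A {incident_minutes}-minute outage burns "
      f"{incident_minutes / budget_minutes:.0%} of the budget")
# -> A 25-minute outage burns 58% of the budget
```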

      • mesamune@lemmy.world · 39 points · 9 months ago

        [Downdetector screenshot]

        Looks like it may have been AWS or something. All kinds of services were down a moment ago. Guess that’s what happens when everything is on major cloud services.

        • khannie@lemmy.world · 24 points · 9 months ago

          Google have their own data centres (and cloud), so it may be something more in the connectivity area.

          • mesamune@lemmy.world · 8 points · 9 months ago

            Maybe, though I would expect redundancy. Ultimately I have no clue. I just remember that the last time AWS went down, it seemed like a majority of the sites I used daily were down all in one go.

            • neatchee@lemmy.world · 13 points · 9 months ago

              Sometimes redundancy doesn’t help when it comes to network traffic routing. That system (BGP) is based heavily on trust, and an incorrect route being published can cause routing loops and the like, which get propagated to everyone very quickly.

              There was a case like this a few years back where a small ISP published a bad route claiming it could handle traffic to a certain set of destinations, then immediately tried to send that traffic back out again (because it couldn’t actually route to those destinations), and the traffic bounced right back to it because of the same bad route. The route was propagated on implicit trust and took down huge chunks of the Internet for a while.
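
              Not that incident and not real BGP, but here’s a tiny Python sketch of the “implicit trust” failure mode being described: one bogus announcement overrides a good route, and traffic just ping-pongs between the two networks. The router names, the prefix, and the accept-whatever-you-hear behaviour are all made up for illustration; real BGP has path selection, route filtering, and (increasingly) RPKI origin validation.

```python
# Toy model of implicit-trust route propagation (BGP-flavoured, not real BGP).
# Shows how one bogus announcement can hijack traffic and create a loop.

class Router:
    def __init__(self, name):
        self.name = name
        self.peers = []      # routers we re-announce to
        self.routes = {}     # prefix -> next-hop Router

    def announce(self, prefix, next_hop):
        """Accept a route on implicit trust and re-announce it to peers."""
        if self.routes.get(prefix) is next_hop:
            return                            # already using this next hop
        self.routes[prefix] = next_hop        # no check of who owns the prefix!
        for peer in self.peers:
            peer.announce(prefix, self)       # propagate, naming ourselves as next hop


def forward(router, prefix, max_hops=8):
    """Follow next hops toward a prefix, giving up after max_hops (like a TTL)."""
    path = [router.name]
    for _ in range(max_hops):
        nxt = router.routes.get(prefix)
        if nxt is None:
            return path + ["<delivered or dropped here>"]
        path.append(nxt.name)
        router = nxt
    return path + ["<loop: TTL exceeded>"]


# Tiny topology: the origin network legitimately owns 203.0.113.0/24,
# and big-ISP knows the correct route to it.
origin = Router("origin-AS")
big_isp = Router("big-ISP")
small_isp = Router("small-ISP")
big_isp.peers = [small_isp]
small_isp.peers = [big_isp]
big_isp.routes["203.0.113.0/24"] = origin

# small-ISP publishes a bad route claiming it can reach the prefix, but its
# only "way out" for that traffic is big-ISP. big-ISP accepts it on trust
# and forgets its good route.
small_isp.routes["203.0.113.0/24"] = big_isp
big_isp.announce("203.0.113.0/24", small_isp)

print(forward(big_isp, "203.0.113.0/24"))
# ['big-ISP', 'small-ISP', 'big-ISP', 'small-ISP', ..., '<loop: TTL exceeded>']
```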

        • merc@sh.itjust.works · 3 points · 9 months ago

          Infrastructure seems likely, but probably not AWS, since it hit Google and Facebook so hard and neither of them runs on AWS. If it were AWS, you’d see Amazon and AWS itself badly affected, followed by everyone who relies on AWS for infrastructure.

        • i_ben_fine · 1 point · 9 months ago

          I don’t think any major news sources confirm your theory.

          • soggy_kitty@sopuli.xyz · 1 point · 9 months ago

            BBC isn’t a major news source? Remember, when they say countries do not confirm something, it’s politically motivated. What governments choose to share is up to them, and it doesn’t confirm what their intelligence agencies actually think.

    • LostXOR@fedia.io · 2 points · 9 months ago

      Yeah there’s definitely some sort of major outage going on. Google Play Store is having some problems for me currently too.

  • Holzkohlen@feddit.de · 43 points · 9 months ago

    Pathetic. My single Podman container has perfect uptime, from when I start it manually with Podman Desktop to when I shut down my PC. I also hold it to only the highest security standard, i.e. it’s not accessible outside my network and all that. I am clearly a cybersecurity expert.

  • kratoz29@lemm.ee · 25 points · 9 months ago

    You know it’s big news when it shows up in the Active feed on Lemmy 😅

    Why didn’t it appear in the wholesome news community, though?