Twitter co-founder Jack Dorsey’s financial services company Block has announced it will fire 40 percent of staff – around 4,000 people – because new “intelligence tools” the company is implementing “can do more and do it better.”

The company announced the sackings in the shareholder letter [PDF] accompanying its Q4 earnings announcement on Thursday. The payments and crypto outfit reported quarterly revenue of about $6.25 billion – up 3.6 percent year-over-year – and gross profit of around $2.9 billion, with $1 billion of that gross profit earned in December 2025 alone. Full-year revenue came in at about $24.2 billion, and gross profit at around $10.36 billion.

“2025 was a strong year for us,” Dorsey wrote in the shareholder letter, before posing the question, “Why are we changing how we operate going forward?”

His answer, spread across the letter and a Xeet, is that AI has already changed the way Block works, so it needs to change its structure.

“We’re already seeing that the intelligence tools we’re creating and using, paired with smaller and flatter teams, are enabling a new way of working which fundamentally changes what it means to build and run a company. and that’s accelerating rapidly,” he wrote on X.

  • Ludicrous0251@piefed.zip · 15 points · 2 days ago

    Can we start placing bets on when we find out that the “AI tools” they’re using are just sweatshop workers in Bangladesh processing invoices?

    • CanadaPlus@lemmy.sdf.org · 5 points · 2 days ago

      Eh, I know this is the anti-AI instance, but reading and interpreting things like that is something you can verifiably get AI to do 90% of the time.

        • XLE@piefed.social · 5 points · 2 days ago

          I’ve got an idea. If 90% of AI’s output is accurate, just have humans review the 10% that will be inaccurate.

          (Yes I am an AI expert, how did you know)

          • TehPers@beehaw.org · 3 points · 17 hours ago

            Which outputs are accurate, and which ones are inaccurate? How could you tell? What steps did you take to verify accuracy? Was verifying it a manual process?

            • XLE@piefed.social · 2 points · 17 hours ago

              That’s easy. You just get a second AI to ask the first AI if their responses were accurate or not

              (/s)

              • TehPers@beehaw.org · 2 points · 16 hours ago

                This is unironically what I’ve seen people try to do, except they assume the second AI is correct.

                Unrelated, but this is how GANs work to some extent. GANs train during the back-and-forth though, while LLMs do not.
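The adversarial dynamic described above can be sketched as a toy loop. This is not a real GAN (no neural networks, no gradients through a loss) – just two made-up one-parameter players, to show the structural point that *both* sides update on every round, unlike an LLM-judges-LLM setup where neither model changes:

```python
import numpy as np

rng = np.random.default_rng(0)
REAL_MEAN = 4.0  # "real" data is drawn from N(4, 1)

g = 0.0  # generator parameter: its fakes are drawn from N(g, 1)
d = 0.0  # discriminator parameter: a decision threshold

for _ in range(2000):
    real = rng.normal(REAL_MEAN, 1.0)
    fake = rng.normal(g, 1.0)
    # Discriminator trains: nudge its threshold toward the midpoint
    # of the real and fake samples it just saw.
    d += 0.05 * ((real + fake) / 2 - d)
    # Generator trains too: when its sample lands on the "fake" side
    # of the threshold, nudge its parameter toward the threshold.
    if fake < d:
        g += 0.05 * (d - g)

# Both parameters moved during the back-and-forth; the generator
# ends up producing samples near the real mean.
```

Two frozen LLMs checking each other would run the same back-and-forth with `g` and `d` never changing – which is the distinction the comment is making.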

                • XLE@piefed.social · 1 point · 12 hours ago

                  That’s also basically how thinking models work too, isn’t it? And probably the new GPT-5 router, which everybody hates…

                  • TehPers@beehaw.org · 1 point · 11 hours ago (edited)

                    Not exactly. Thinking models just inflate the context window to point the model closer to your target. GANs have two models which compete against each other, both training each other, with the goal of one (or both) of those models being improved over time.
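The "inflate the context window" point can be sketched with a toy loop as well. The model below is a made-up stand-in function, not a real LLM; the thing to notice is that "thinking" only appends text to the input, and nothing in the loop updates any weights:

```python
def stub_model(context: str) -> str:
    # Stand-in for a frozen LLM: a pure function of its input.
    # A real model would generate reasoning text; this one just
    # reports how much context it was handed.
    return f"[step with {len(context)} chars of context]"

def think_then_answer(model, question: str, steps: int = 3) -> str:
    context = question
    for _ in range(steps):
        # "Thinking" appends tokens to the context window.
        # The model itself never changes during this loop --
        # unlike a GAN, where both networks train as they compete.
        context += "\n" + model(context)
    return model(context)

answer = think_then_answer(stub_model, "What is 2 + 2?")
```

Each call sees a strictly longer context than the last, which is all the "improvement" the loop provides.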

        • CanadaPlus@lemmy.sdf.org · 2 points · 2 days ago

          No, it’s really not. Thus the 6000 remaining employees.

          (Assuming this is a significant part of their business)

      • scintilla@crust.piefed.social · 4 points · 2 days ago

        Hell, USPS has been using machine learning (yes, a kind of AI, but not the kind they’re implying) for years to do that kind of thing.

        • Paradox@lemdro.id · 4 points · 2 days ago

          Kind of

          They’ve had several address resolution centers around the country, where reviewers look at mail and figure out its address. They don’t physically handle the mail; it’s an image on a screen.

          Iirc they’ve been doing it this way since the 70s

          • scintilla@crust.piefed.social · 2 points · 1 day ago

            No? For everything it can, the machine just uses OCR and sends mail on its way, sometimes without a human ever seeing it. If the handwriting is bad enough that the machine can’t figure it out, that’s where the human reviewers come in.
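The split described in this thread – the machine handles confident reads, humans only see the rest – is confidence-threshold routing. A toy sketch (the OCR engine and all names here are made up, not USPS's actual system):

```python
def fake_ocr(image):
    # Stand-in for a real OCR engine. Our fake "images" are just
    # (text, confidence) pairs, so reading one is trivial.
    text, confidence = image
    return text, confidence

def route_mail(scans, threshold=0.90):
    """Send confident reads straight through; queue the rest for humans."""
    auto_sorted, human_review = [], []
    for image in scans:
        text, confidence = fake_ocr(image)
        if confidence >= threshold:
            auto_sorted.append(text)      # no human ever sees this piece
        else:
            human_review.append(image)    # off to a remote review center
    return auto_sorted, human_review

scans = [("123 Main St", 0.98), ("?? Elm ??", 0.41), ("7 High Rd", 0.95)]
auto_sorted, human_review = route_mail(scans)
```

The threshold is the whole policy knob: raise it and more mail goes to humans, lower it and more misreads slip through automatically.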