Twitter co-founder Jack Dorsey’s financial services company Block has announced it will fire 40 percent of staff – around 4,000 people – because new “intelligence tools” the company is implementing “can do more and do it better.”

The company revealed the sackings in the shareholder letter [PDF] accompanying its Q4 earnings announcement on Thursday. The payments and crypto outfit reported quarterly revenue of about $6.25 billion – up 3.6 percent year-over-year – and gross profit of around $2.9 billion, including $1 billion of gross profit in December 2025 alone. Full-year revenue came in at about $24.2 billion, and gross profit was around $10.36 billion.

“2025 was a strong year for us,” Dorsey wrote in the shareholder letter, before posing the question, “Why are we changing how we operate going forward?”

His answer, spread across the letter and a Xeet, is that AI has already changed the way Block works, so the company needs to change its structure to match.

“We’re already seeing that the intelligence tools we’re creating and using, paired with smaller and flatter teams, are enabling a new way of working which fundamentally changes what it means to build and run a company. and that’s accelerating rapidly,” he wrote on X.

    • XLE@piefed.social · 2 days ago

      I’ve got an idea. If 90% of AI’s output is accurate, just have humans review the 10% that will be inaccurate.

      (Yes I am an AI expert, how did you know)

      • TehPers@beehaw.org · 22 hours ago

        Which outputs are accurate, and which ones are inaccurate? How could you tell? What steps did you take to verify accuracy? Was verifying it a manual process?

        • XLE@piefed.social · 22 hours ago

          That’s easy. You just get a second AI to ask the first AI if their responses were accurate or not

          (/s)

          • TehPers@beehaw.org · 21 hours ago

            This is unironically what I’ve seen people try to do, except they assume the second AI is correct.

            Unrelated, but this is how GANs work to some extent. GANs train during the back-and-forth though, while LLMs do not.

            • XLE@piefed.social · 17 hours ago

              That’s also basically how thinking models work too, isn’t it? And probably the new GPT-5 router, which everybody hates…

              • TehPers@beehaw.org · edited · 16 hours ago

                Not exactly. Thinking models just inflate the context window to point the model closer to your target. GANs have two models which compete against each other, both training each other, with the goal of one (or both) of those models being improved over time.

    • CanadaPlus@lemmy.sdf.org · 2 days ago

      No, it’s really not. Thus the 6000 remaining employees.

      (Assuming this is a significant part of their business)
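The GAN dynamic TehPers describes in the thread – two models competing, each one's gradient step reacting to the other's – can be sketched as a toy one-dimensional GAN in plain Python. Everything here is illustrative rather than real GAN practice: the "real data" is just the constant 5.0, the generator is a single parameter `theta` that emits itself as a fake sample, and the discriminator is a logistic classifier with hand-derived binary cross-entropy gradients, so no ML framework is needed.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data is the constant 5.0; the generator starts far away at 0.0.
REAL = 5.0
theta = 0.0          # generator: emits theta as its fake sample
w, b = 0.0, 0.0      # discriminator: D(x) = sigmoid(w*x + b)
lr_d, lr_g = 0.1, 0.05

for _ in range(2000):
    fake = theta
    d_real = sigmoid(w * REAL + b)
    d_fake = sigmoid(w * fake + b)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    # These are the binary cross-entropy gradients, derived by hand.
    grad_w = (d_real - 1.0) * REAL + d_fake * fake
    grad_b = (d_real - 1.0) + d_fake
    w -= lr_d * grad_w
    b -= lr_d * grad_b

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    d_fake = sigmoid(w * fake + b)
    theta -= lr_g * (d_fake - 1.0) * w

print(f"generator parameter after training: theta = {theta:.2f}")
```

Because both players keep reacting to each other, `theta` tends to oscillate around the data rather than settle exactly on it – the training instability GANs are known for. It also illustrates the distinction made in the thread: here both models update their weights during the back-and-forth, whereas two LLMs critiquing each other update nothing.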