• @conditional_soup@lemm.ee · 242 points · 1 year ago

    I’d like to know why exactly the board fired Altman before I pass judgment one way or the other, especially given the mad rush by the investor class to re-instate him. It makes me especially curious that the employees are sticking up for him. My initial intuition was that MSFT convinced Altman to cross bridges that he shouldn’t have (for $$$$), but I doubt that a little more now that the employees are sticking up for him. Something fucking weird is going on, and I’m dying to know what it is.

    • @scarabic@lemmy.world · 85 points · 1 year ago (edited)

      Wanting to know why is reasonable but it’s sus that we don’t already know. Why haven’t they made that clear? How did they think they could do this without a solid explanation? Why hasn’t one been delivered to set the rumors to rest?

      It stinks of incompetence, or petty personal drama. Otherwise we’d know by now the very good reason they had.

    • @los_chill@programming.dev · 57 points · 1 year ago (edited)

      Altman wanted profit. The board prioritized (rightfully, and true to their mission) responsible, non-profit stewardship of AI. Employees now side with Altman out of greed and view the board as denying them their mega payday. Microsoft is dangling jobs for employees wanting to jump ship and make as much money as possible. This whole thing seems pretty simple: greed (Altman, Microsoft, employees) vs. the original non-profit mission (the board).

      Edit: spelling

      • @CoderKat@lemm.ee · 10 points · 1 year ago

        That’s what I thought it was at first too. But regular employees aren’t usually all that interested in their company being profit driven. Especially AI researchers. Most of those that I know are extremely passionate about ethics in AI.

        But do they know things we don’t know? They certainly might. Or it might just be bandwagoning or the like.

        • @los_chill@programming.dev · 9 points · 1 year ago

          But regular employees aren’t usually all that interested in their company being profit driven. Especially AI researchers. Most of those that I know are extremely passionate about ethics in AI.

          I would have thought so too of the employees, but threatening a move to Microsoft kinda says the opposite. That or they are just all-in on Altman as a person.

    • @Ullallulloo@civilloquy.com · 34 points · 1 year ago

      The only explanation I can come up with is that the workers and Altman both agreed on monetizing AI as much as possible. They’re worried that if the board doesn’t resign, the company will remain a non-profit that is more conservative about selling its products, so they won’t get their share of the money that could be made.

      • @Melt@lemm.ee · 41 points · 1 year ago

        The tone of the blog post is so amateurish I feel like I’m reading a reddit post on r/Cryptocurrency

      • @I_Clean_Here@lemmy.world · 26 points · 1 year ago

        Don’t get me wrong, this move from the board reeks of some grade A bullshit, but this article is absolute crap. Is this supposed to be serious journalism?

      • @conditional_soup@lemm.ee · 18 points · 1 year ago

        Thanks for sharing. That is… weird in ways I didn’t anticipate. “Weird cult of pseudointellectuals upending the biggest name in Silicon Valley” wasn’t on my bingo board.

        • FaceDeer · 14 points · 1 year ago

          IMO there are some good reasons to be concerned about AI, but those reasons are along the lines of “it’s going to be massively disruptive to the economy and we need to prepare for that to ensure it’s a net positive”, not “it’s going to take over our minds and turn us into paperclips.”

          • @diablexical@lemm.ee · 2 points · 1 year ago

            The author did a poor job of explaining that. He’s referencing the “paperclip maximizer” thought experiment: a businessman instructs a super-effective AI to make paperclips. Given a terse enough objective and an effective enough AI, one can imagine a scenario in which the businessman, and in fact the whole world, ends up turned into paperclips. That is obviously not the businessman’s goal, but it is the instruction he gave the AI. The implication of the thought experiment is that AI needs guardrails, perhaps even ethics, or else it can unintentionally bring about a doomsday scenario.
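            The runaway-objective idea behind the thought experiment can be sketched as a toy loop (purely illustrative; the function, numbers, and guardrail are invented for this comment, not anyone’s actual proposal):

```python
from typing import Optional


def make_paperclips(resources: int, guardrail: Optional[int] = None) -> tuple:
    """Greedily turn every available resource into paperclips.

    Returns (paperclips_made, resources_left).
    """
    clips = 0
    while resources > 0:
        # Without a guardrail, the bare objective "maximize paperclips"
        # never stops consuming resources.
        if guardrail is not None and clips >= guardrail:
            break
        resources -= 1
        clips += 1
    return clips, resources


# The terse objective alone devours the whole "world" of resources:
print(make_paperclips(1_000_000))                 # (1000000, 0)
# The same optimizer with an external limit leaves the world intact:
print(make_paperclips(1_000_000, guardrail=100))  # (100, 999900)
```

            The point is just that the stopping condition has to come from outside the objective; nothing in “maximize paperclips” itself ever says “enough.”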

      • @Bal@lemm.ee · 8 points · 1 year ago

        I don’t know a lot about the background but this article feels super biased against one side.

      • roguetrick · 3 points · 1 year ago

        A duel between hucksters and the delusional makes sense. The delusional rely on the hucksters for funding whether they want to or not though. No heroes.

      • @Coasting0942@reddthat.com · 2 points · 1 year ago

        Can somebody explain the following quote in the article for me please?

        Rationalists’ chronic inability to talk like regular humans may even explain the statement calling Altman a liar.

        • @vanquesse@lemmy.blahaj.zone · 2 points · 1 year ago

          Imagine “Roko’s basilisk”, but extended into an entire philosophy. It’s the idea that “we” need to do anything and everything to create the inevitable ultimate super-AI, as fast as possible. Climate change, wars, exploitation, suffering? None of that matters compared to the benefits humanity stands to gain when the ultimate super-AI goes online.

    • @Blackmist@feddit.uk · 10 points · 1 year ago

      Yeah, the speed at which MS snapped him up makes me think of Zampella and West from Infinity Ward.

      • @Chocrates@lemmy.world · 5 points · 1 year ago

        Microsoft stock dropped 2% on the announcement; hiring him was just a way to stop the hemorrhaging while they figure out what to do.

        • @Blackmist@feddit.uk · 3 points · 1 year ago

          Isn’t that more because MS owns a large stake in OpenAI? But then 2% is neither here nor there anyway. More background noise than anything.

    • @morrowind@lemmy.ml · 4 points · 1 year ago

      I don’t think MSFT convinced him with money, but rather opportunity. He clearly still wants to work on AI, and the second-best place for that after OpenAI is Microsoft.

      • @SnipingNinja@slrpnk.net · 3 points · 1 year ago

        Second best would be Google, but for him it’s Microsoft, because he’s probably getting a sweetheart deal that leaves him in control of his own destiny (not really, but at least for a short while).

        • @morrowind@lemmy.ml · 2 points · 1 year ago

          Microsoft has access to a lot of OpenAI’s code, weights, etc., and he’s already been working with them. That would be much better for him than joining some other company he has no experience with.

          • @SnipingNinja@slrpnk.net · 5 points · 1 year ago

            He’s not the guy who writes code; he’s a VC or management guy. You might say he has good ideas, as the ChatGPT interface is attributed to him, but he didn’t build it.