• BynarsAreOk [none/use name]@hexbear.net

      Yeah, I was going to comment about this too, but yes, IMO it almost surely is, because otherwise that person is completely lacking in reading comprehension; i.e., how on earth is “making [optimal] decisions” fundamentally different from “mumbo jumbo about threats and opportunities”? Also, the “optimal decisions” just mentioned are literally the opposite of the definition of “one size fits all”. And it goes without saying that it comes with that general air of superiority, the stereotypical confidently-wrong response lol.

      But maybe I’m wrong and it is in fact very common to make these mistakes, heck I’m not a native English speaker maybe I’m missing something here lol.

  • MerryChristmas [any]@hexbear.net

    I got GPT4 to write some angry comments about this and it is sort of uncanny:

    Leadership is far more than data points and algorithms! It encompasses intuition, human connection, ethical decision-making, and countless intangibles that a machine could never grasp.

    Organizations thrive on human connections, shared values, and mutual aspirations. The idea that an algorithm could replace the heartbeat of a company is baffling.

    While AI can augment and support decision-making, the visionary leadership and human touch that CEOs provide are irreplaceable. Let’s not lose sight of what truly matters.

    This is the human touch that CEOs bring to the table, folks.

    • CarbonScored [any]@hexbear.net

      To be honest, “make green number go up” is probably the best task you could find for an AI to replace humans on. All the actual work, meanwhile, takes real multi-variable thought, prediction, skill and consideration.

      • ChairmanSpongebob [he/him]@hexbear.net

        exactly. for all the reductionism behind the idea that our leadership (politicians, CEOs etc.) has no real agency because they are inevitably compelled to act in ways that further capitalism and profit, it’s only rationally consistent to replace them with machines. Someone ought to tell them that.

  • Owl [he/him]@hexbear.net

    LMAO dude, that’s not what CEOs do, that’s like a mid-level director of engineering. The CEO’s job is to prevent the rest of the executives from looting the company before the shareholders can do it.

  • conditional_soup@lemm.ee

    Noooooo, not like that! Automation is only for people who didn’t go to Harvard!

    The funny thing is, the only barrier here is context size. Right now, LLMs have laughably bad context sizes (or attention spans, in human terms; it’s basically how much information a Brian or model can keep active at any point in time) compared to humans, but that’s going to change. It’s not difficult to foresee a near future of LLMs with superhumanly large context sizes that could make human leadership seem ridiculously incompetent in comparison.

    Here’s the thing: pyramid-like organizational structures are extremely common because we necessarily have layers of abstraction. The head of the organization can’t do their job effectively if they’re worried about whether Bob the Welder is going to make it in on time or whether that invoice got paid yet; likewise, Bob the Welder can’t do his job if he’s getting pulled off work to sit in marketing meetings all day. There’s only so much attention any one person can give in a day. The biggest problem is that information gets lost between these layers of abstraction, values don’t necessarily remain consistent, and policies and practices aren’t uniformly applied, which can make it difficult for customers and even employees to navigate the normal processes of an organization, let alone the abnormal ones.

    As LLM context sizes reach superhuman levels, it’s conceivable that they could end up flattening organizational structures by being able to be both Bob’s supervisor and the CEO (or at least the CEO’s assistant), and being able to keep all of the organization’s context, down to the individual employee and customer needs, in mind at all times when making decisions. A government or corporation run by a properly aligned super-context AI could possibly be the closest thing we’re going to get to utopian leadership, and would likely be both more ethical and more effective than human leadership.
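    To make the “context size” point concrete, here’s a toy sketch (purely illustrative; the token budget and the `visible_context` helper are made up, not any real model’s API). The model can only attend to the most recent slice of the conversation; everything older effectively falls out of its attention span:

```python
# Toy illustration of a "context window": a model can only attend to the
# most recent `context_size` tokens; anything earlier is forgotten.
def visible_context(tokens, context_size):
    """Return the trailing slice of the history the model can still 'see'."""
    return tokens[-context_size:]

history = ["Bob", "is", "late", "...", "invoice", "paid", "...", "Q3", "plan"]

# A small window loses the start of the conversation...
print(visible_context(history, 4))    # ['paid', '...', 'Q3', 'plan']

# ...while a large enough window keeps the whole organization's context
# in view at once, which is the scenario being speculated about here.
print(visible_context(history, 100))  # the full history
```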

    • Nakoichi [they/them]@hexbear.netM

      how much information a Brian or model can keep active

      Yeah fuck Brian.

      would likely be both more ethical and more effective than human leadership.

      Here’s where you are wrong and I have something you should listen to to understand why.

      https://soundcloud.com/thismachinekillspod/281-the-smoking-gun-of-techno-capitalism-ft-meredith-whittaker

      Here’s the specific article on why this is a utopian pipe dream, and why the reality under capitalism is much different and much scarier.

      https://logicmag.io/supa-dupa-skies/origin-stories-plantations-computers-and-industrial-control/

      • conditional_soup@lemm.ee

        Good response, and thanks for bringing receipts; I’d love to read this a little later. Imo, though, large language models and generative AI in particular represent the capacity to make the means of production free and open source. True, freely available models that you could run on a gaming computer don’t hold up against ChatGPT yet, but I suspect that will change as the emphasis in AI research pivots towards making models more efficient. It’s also true that if a general AI is developed, it’s not going to be FOSS, though that’s honestly not the worst idea.

        With respect to your article on Babbage, I’d like to point out that much of the leadership in AI right now has been leading with the idea that any AI must follow the 3 Hs: Honest, Harmless, and Helpful. I think it’s more than just hype, IMO, because they’re currently burning a lot of cash hiring teams whose whole job is to make sure we get the alignment of a potential super-intelligence correct (that is, constraining it with ethical values rather than letting it become a paperclip maximizer). To be quite frank, there are a lot of MBAs out there who could stand to pick up those 3 Hs.

        • combat_brandonism [they/them]@hexbear.net

          Imo, though, large language models and generative AI in particular represent the capacity to make the means of production free and open source.

          I remember left-sympathetic cryptobros saying the same thing about cryptocurrencies for the last decade.

          • conditional_soup@lemm.ee

            I really never saw the value proposition with crypto, besides it being digital cash.

            A key difference is that generative AI actually can, and already does, produce value as a means of production. Tons of folks use ChatGPT to save hours on their workflows; I’m a programmer and it’s probably saved me days of work, and I’m far from an edge case. Imo, the most telling thing is that a lot of the major AI companies are begging Congress to pull the ladder up behind them so that you can only develop AI if your market cap is at least this high; I think some of them are worried that decentralized, FOSS AIs will seriously erode their value propositions, and I think their suspicions are correct.

    • SoyViking [he/him]@hexbear.net

      The problems facing the world today do not come from leaders having too-short attention spans or inadequate access to information. They come from these rulers representing bourgeois rather than proletarian interests. No amount of bazinga is going to overcome class conflict and make the dictatorship of the bourgeoisie make decisions that benefit the masses.

      • conditional_soup@lemm.ee

        It’s possible that, if giant-context models are freely available, flat-structured organizations run by AI could outcompete less agile pyramid-structured ones. We could see the bourgeoisie hoisted by their own petard.

  • ChaosMaterialist [he/him]@hexbear.net

    I should write a piece on how Neoliberalism is already carving up the CEO (and other) leadership positions. Hedge funds and other capital vultures constantly shuffle the corporate suite to suit their interests, so there is absolutely a place for computers as labor-saving devices for managing a portfolio of companies at these huge capital conglomerates. Cyberpunk was only wrong about the aesthetics.

    EDIT: My real hot take should be that CEOs are undergoing Proletarianization. The masters of Capital reveal themselves as its greatest slaves.

    • Mardoniush [she/her]@hexbear.net

      I’ve noticed this too. There are already C-level agencies where a hedge fund can dial up a specific board for their purpose, from fucking over the founders of a seed start-up, to pivoting from a pro-consumer growth model to profit maximisation, to “Strip the copper wiring before the smallholders notice”.

      See also the heads of banks and financial institutions going from among the richest captains of industry to mere money butlers, with fortunes two orders of magnitude below their clients’.

      There’s no longer a spectrum of bourgeoisie: there’s your local used car salesman or medium business owner with 30 employees, and then there’s the mega rich. Everyone in between is now Labour Aristocracy, and eventually they’re gonna realise that.

  • LLMs can already replace those shitty organization-wide emails that go out telling us all that we matter, justifying raises below the rate of inflation, and pretending that screwing the client is “actually” delivering added value to the client.

    shareholders should be clamoring for LLMs to replace CEOs, because LLMs aren’t going to jack up executive payouts right before they take on a bunch of debt and file for bankruptcy.

    • pillow [she/her]@hexbear.net

      bit idea- marxist who hates the term pmc and defends megacorp ceos against their shareholders because he doesn’t want to divide the working class

  • SoyViking [he/him]@hexbear.net

    Can a chatbot golf? Can it get drunk at network meetings? Can it make up dumbass reactionary ideas and pay people to replicate them?

    No, AI will never be able to replace CEOs.

  • UmbraVivi [he/him, she/her]@hexbear.net

    Actually computers are extremely qualified to be CEO because the only trait you need is being willing to fuck over as many people as possible to maximize profits for your shareholders. The less empathy, the better. Shareholders want absolute psychos who will fire 50% of the workforce with no hesitation to make line go up by 2%.