• wantd2B1ofthestrokes@discuss.tchncs.de

    I’m not a proponent of this mindset, but this seems like an obvious mischaracterization of the argument.

    My biggest issue is that it seems to exist only in direct response to “doomers,” as they love to say. They’re maybe right to criticize, but having the whole thing just be a counter-extreme doesn’t work either. And there’s a lot of hand-waving about technology and history and markets correcting themselves.

    But I’ve never gotten the impression that it’s just a cynical “I don’t care if AI fucks everyone as long as I make money.”

    • Eldritch@lemmy.world

      It’s 100% not a mischaracterization. Most of the ghouls involved with these AI companies, to a one, are effective altruists who would gladly see millions suffer and die if they thought it would mean that they and the people they chose would get to go on and colonize space and other worlds. They would totally rip this planet a new hole and shit all over everything just to get themselves ahead.

      They really, really don’t care. Altman, Musk, Google, Microsoft: they literally couldn’t care less. You’re more than welcome to try to prove otherwise, though. But I don’t condone masochism.

      • AnneBonny@lemmy.dbzer0.com

        Most of the ghouls involved with these AI companies, to a one, are effective altruists who would gladly see millions suffer and die if they thought it would mean that they and the people they chose would get to go on and colonize space and other worlds.

        I don’t understand what you’re saying here, given that Wikipedia describes effective altruism as:

        Effective altruism (often abbreviated EA) is a 21st-century philosophical and social movement that advocates “using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis”.[1][2] People who pursue the goals of effective altruism, sometimes called effective altruists,[3] may choose careers based on the amount of good that they expect the career to achieve or donate to charities based on the goal of maximising positive impact. They may work on the prioritization of scientific projects, entrepreneurial ventures, and policy initiatives estimated to save the most lives or reduce the most suffering.[4]: 179–195

        Effective altruists aim to emphasize impartiality and the global equal consideration of interests when choosing beneficiaries. Popular cause priorities within effective altruism include global health and development, social inequality, animal welfare, and risks to the survival of humanity over the long-term future.

        https://en.wikipedia.org/wiki/Effective_altruism

        • wantd2B1ofthestrokes@discuss.tchncs.de

          Effective altruism is something that sounds good in principle, and I still think it’s good in general, though it can kind of run out of control.

          Sam Bankman-Fried was someone who at least claimed to follow this philosophy. The issue is that you can talk yourself into doing bad things (fraud) in the name of earning money that you would then donate much of.

          More generally, you can get into doing “long term” or “big picture” good while also doing a lot of harm along the way. But hey, the ends justify the means.

          Again, I think the principle of being a lot more calculated about how we do philanthropy is a hugely good thing. But the EA movement has had some missteps and probably needs to be reined in a bit.

          Funnily enough, the Wikipedia article quotes Altman as one of the critics.

        • wikibot@lemmy.world (bot)

          Here’s the summary for the wikipedia article you mentioned in your comment:

          Effective altruism (often abbreviated EA) is a 21st-century philosophical and social movement that advocates "using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis". People who pursue the goals of effective altruism, sometimes called effective altruists, may choose careers based on the amount of good that they expect the career to achieve or donate to charities based on the goal of maximising positive impact. They may work on the prioritization of scientific projects, entrepreneurial ventures, and policy initiatives estimated to save the most lives or reduce the most suffering.

          Effective altruists aim to emphasize impartiality and the global equal consideration of interests when choosing beneficiaries. Popular cause priorities within effective altruism include global health and development, social inequality, animal welfare, and risks to the survival of humanity over the long-term future.

          The movement developed during the 2000s, and the name effective altruism was coined in 2011. Philosophers influential to the movement include Peter Singer, Toby Ord, and William MacAskill. What began as a set of evaluation techniques advocated by a diffuse coalition evolved into an identity. With approximately 7,000 people active in the effective altruism community and strong ties to the elite schools in the United States and Britain, effective altruism has become associated with Silicon Valley and the technology industry, forming a tight subculture.

          The movement received mainstream attention and criticism with the bankruptcy of the cryptocurrency exchange FTX, as founder Sam Bankman-Fried was a major funder of effective altruism causes prior to late 2022. Within the Bay Area, it received criticism for having a culture that has been described as toxic and sexually exploitative towards women, which led to conversations inside the community about how to create an environment that can better prevent and fight sexual misconduct.


      • wantd2B1ofthestrokes@discuss.tchncs.de

        I don’t think it’s necessarily true that if we listen to “doomers” we get sensible policy. If anything, it’s probably more likely we get regulatory capture.

        But there does exist a sensible middle ground.

        I actually think they are correct to bring up the potential upside as something we should consider more in the moral calculus. But of course it’s taken to a silly extreme.