TL;DR: The existing incentive structure makes it impossible to prevent the creation of large machine learning models, which is what Yudkowsky and others want to do.

Also, keep in mind that the paperclip maximizer scenario is completely hypothetical.

  • FeepingCreature@burggit.moe · 1 year ago

    The AI race is entirely perpetuated by people who wish really hard they were helpless victims of the AI race, so they could be excused for continuously perpetuating it in the pursuit of cool demos. Unfortunately, that just isn’t the case. OpenAI, to their credit, seem to have realized this, hence their not working on GPT-5 yet.

    You can see the mask come off in Nadella’s “we made them dance” address, where it’s very clear that AI risk simply isn’t a salient danger to them at all. Everything else is just whining. They should just come out and say, “We could stop any time we want; we just don’t want to.” Then at least they’d be honest.

    Meta, to their begrudging credit, is pretty close to this. They’re making things worse and worse for safety work, but they’re not doing it out of some “well we had to or bla bla” victim mindset.

    Everyone signs the letter, nobody builds the AI; then we find out in ten years that the CIA was building it the entire time with NSA assistance, it gets loose, and everyone gets Predator-droned before the paperclip-maximizer machines ever get going.

    You know what happened in scenario 5? We got ten years for alignment. Still pretty much impossible, but hey, maybe the horse will sing.

    Now, there are plausible takes for going as fast as possible - I am partial to “we should make unaligned AI now because it’ll never be this weak again, and maybe with a defeatable ASI we can get a proper Butlerian Jihad going” - but this ain’t it, chief.

    • SquishyPillow@burggit.moe (OP) · 1 year ago

      I should clarify: I don’t care about alignment whatsoever. I don’t really care if you disagree; it will only hurt you in the long run.

  • Phossu@burggit.moe · 1 year ago

    I’m not convinced that AI will necessarily lead to the destruction of humanity. I’m not even convinced that our current methods of “AI” are a path to artificial general intelligence. This article makes some good points, and treaties that attempt to stop AI development will likely be futile, but I’m not so sure that all roads lead to death here. More likely, our society and our world get steadily worse and we die from other causes, e.g. climate change, which AI may exacerbate.

      • SquishyPillow@burggit.moe (OP) · 1 year ago

      Neither am I. From what I can tell, neither is the author, if you pay attention to the end of the article.

      I believe the absolute worst-case scenario is that we grow to rely on AI so much that it ends up controlling us indirectly, leading to any number of weird consequences.