TL;DR: The existing incentive structure makes it impossible to prevent the creation of large machine learning models, as Yudkowsky and others want to do.

Also, keep in mind that the paperclip maximizer scenario is completely hypothetical.

  • Phossu@burggit.moe · 1 year ago

    I’m not convinced that AI will necessarily lead to the destruction of humanity. I’m not even convinced that our current methods of “AI” are a path to artificial general intelligence. This article makes some good points, and it is likely that treaties attempting to stop AI development will be futile, but I’m not so sure that all roads lead to death here. More likely, our society and our world get steadily worse and we die from other causes, e.g. climate change, that AI may exacerbate.

    • SquishyPillow@burggit.moe (OP) · 1 year ago

      Me neither. From what I can tell, neither is the author, judging by the end of the article.

      I believe the absolute worst-case scenario is that we grow to rely on AI so much that it ends up controlling us indirectly, leading to any number of weird consequences.