TL;DR: The existing incentive structure makes it impossible to prevent the creation of large machine learning models, as Yudkowsky and others want to do.

Also, keep in mind that the paperclip maximizer scenario is completely hypothetical.

  • SquishyPillow@burggit.moeOP
    1 year ago

    I should just clarify: I don’t care about alignment whatsoever. I don’t really care if you disagree; it will only hurt you in the long run.