TL;DR: The existing incentive structure makes it impossible to prevent the creation of large machine learning models, which is what Yudkowsky and others want to do.
Also, keep in mind that the paperclip maximizer scenario is completely hypothetical.
Me neither. From what I can tell, neither is the author, if you pay attention to the end of the article.
I believe the absolute worst-case scenario is that we grow to rely on it so heavily that it ends up controlling us indirectly, leading to any number of strange consequences.