TL;DR: The existing incentive structure makes it impossible to prevent the creation of large machine learning models, as Yudkowsky and others want to do.
Also, keep in mind that the paperclip maximizer scenario is completely hypothetical.
To be clear: I don’t care about alignment whatsoever. I also don’t care if you disagree; disagreeing will only hurt you in the long run.