• 1 Post
  • 1.34K Comments
Joined 2 years ago
Cake day: April 6th, 2024

  • I want to take this argument about efficiency in a different direction. Start with two key observations: first, the system doing the simulating will never be as efficient as the system being modeled; second, a conscious system is aware of its own efficiency. It follows that even if you simulate an entire human body to create consciousness, the result will not have the same quality. It will either be aware of all the extra resources required to create its “self”, or be fed a simulation of self that hides its own nature and thus cannot be self-aware.
  • Eliezer, given the immense capacity of the human mind for self-delusion, it is entirely possible for someone to genuinely believe they’re being 100% altruistic even when it’s not the case. Since you know this, how then can you be so sure that you’re being entirely altruistic?

    Because I didn’t wake up one morning and decide “Gee, I’m entirely altruistic”, or follow any of the other patterns that are the straightforward and knowable paths into delusive self-overestimation, nor do I currently exhibit any of the straightforward external signs which are the distinguishing marks of such a pattern. I know a lot about the way that the human mind tends to overestimate its own altruism.

    Fun to unpack this here. The question’s premise is that we should be dismissive of any professed act of altruism unless the person is perfectly knowable, and it misses an interesting point: even if the person knows themselves well enough to make the claim, others cannot possibly know them well enough to verify it. Instead what we get is “trust me bro”, because being contrarian is supposedly evidence of being on the correct path 🙄. We went from “we can’t possibly know another person well enough to say they are altruistic” to “I can tell when people are not altruistic because they are predictable, but I am unpredictable, therefore I am altruistic.” I think this touches on the manipulation present in the community: either you are being manipulated and therefore cannot be an altruist because your motives are not your own (are you even selfish at that point?), or you are contrarian enough to show you are in control of your own motives (never mind that we still can’t say whether those motives are altruistic). This is a very surface-level read; I can’t bring myself to read all that slop. Parts are so redundant it feels like it was written by AI.