The theory, which I probably misunderstand because I have a similar level of education to a macaque, states that because a simulated world would eventually develop to the point where it creates its own simulations, it’s then just a matter of probability that we are in a simulation. That is, if there’s one real world, and a zillion simulated ones, it’s more likely that we’re in a simulated world. That’s probably an oversimplification, but it’s the gist I got from listening to people talk about the theory.

But if the real world sets up a simulated world that more or less perfectly simulates itself, wouldn’t running a mirror sim-within-a-sim take at least double the processing power/resources? How could the infinitely recursive simulations even begin to be set up unless the real meat people keep adding more and more hardware to their initial simulation? It would be like that cartoon (or was it a silent movie?) of a guy laying down train track struts while sitting on the cowcatcher of a moving train. Except in this case the train would be moving at close to the speed of light.

Doesn’t this fact alone disprove the entire hypothesis? If I set up a 1:1 simulation of our universe, then just sit back and watch, any attempt by my simulant people to create something that would exhaust all of my hardware would just… not work? Blue screen? Crash the system? Crunching the numbers of a 1:1 sim within a 1:1 sim would not be physically possible for a processor that can just about handle the first simulation. The simulation’s own simulated processors would still need their processing done by Meat World; you’re essentially just passing the CPU-buck backwards like a rugby ball until it lands in the lap of the real world.

And this is just if the simulated people create ONE simulation. If 10 people in that one world decide to set up similar simulations simultaneously, the hardware for the entire sim reality would be toast overnight.
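To put rough numbers on this: if every simulated civilization spins up some number of sims of its own, the real hardware’s bill grows geometrically with nesting depth. A toy back-of-the-envelope sketch (the branching factor and the one-unit-of-compute-per-universe cost are made-up assumptions, obviously):

```python
# Toy model: every universe at depth d hosts k child simulations,
# and each simulated universe costs the REAL hardware one "unit"
# of compute (all nested processing bottoms out in Meat World).
def real_world_cost(k, depth):
    # Sum the load over all sims at every nesting level:
    # k + k^2 + ... + k^depth
    return sum(k ** d for d in range(1, depth + 1))

print(real_world_cost(1, 10))   # one sim per level: 10 units, linear
print(real_world_cost(10, 10))  # ten sims per level: ~11 billion units
```

With one sim per level the load only grows linearly; with ten per level it explodes, which is the toast-overnight scenario.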

What am I not getting about this?

Cheers!

  • blahsay@lemmy.world · 20 points · 5 months ago

    It’s simple: you cheat. In computer games we only draw the things you’re looking at. We give the appearance of simulating the whole thing, but the ‘world’ or universe is actually very limited and you can’t visit most places. Sound familiar?
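    The trick looks something like this in code. A toy sketch (the chunk/biome names are invented for illustration; this isn’t how any real engine works):

```python
import random

# Toy "lazy universe": nothing is computed until somebody looks at it.
class LazyWorld:
    def __init__(self, seed):
        self.seed = seed
        self.rendered = {}  # only chunks that were ever observed

    def observe(self, x, y):
        # A chunk springs into existence on first observation;
        # deriving the RNG from (seed, x, y) keeps repeat visits consistent.
        if (x, y) not in self.rendered:
            rng = random.Random(hash((self.seed, x, y)))
            self.rendered[(x, y)] = rng.choice(["forest", "ocean", "desert"])
        return self.rendered[(x, y)]

world = LazyWorld(seed=42)
world.observe(0, 0)           # only now does chunk (0, 0) get computed
print(len(world.rendered))    # 1 chunk stored, not the whole "universe"
```

    The simulation pays only for what is observed, so total cost tracks the observers, not the nominal size of the world.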

    • zbyte64@awful.systems · 5 points · 5 months ago

      I don’t think you can approximate Turing-complete algorithms, though. Then you end up with a situation where the simulation is building these Turing machines out of other simulated components, so it’s even more overhead than just giving the simulated agents direct CPU time.
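      To get a feel for that overhead: build something as trivial as 8-bit addition out of nothing but simulated NAND gates and count the primitive gate evaluations, versus the single native ADD the host CPU would use. A toy sketch (this particular gate construction is one standard choice among many):

```python
# Toy model of "building a machine out of simulated components":
# 8-bit addition from nothing but simulated NAND gates.
counter = {"nand": 0}

def nand(a, b):
    counter["nand"] += 1
    return 0 if (a and b) else 1

def xor(a, b):                    # 4 NANDs
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def and_(a, b):                   # 2 NANDs
    t = nand(a, b)
    return nand(t, t)

def or_(a, b):                    # 3 NANDs
    return nand(nand(a, a), nand(b, b))

def full_adder(a, b, cin):        # 15 NANDs per bit
    s1 = xor(a, b)
    return xor(s1, cin), or_(and_(a, b), and_(s1, cin))

def add8(x, y):
    out, carry = 0, 0
    for i in range(8):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        out |= bit << i
    return out                    # final carry dropped, like a real 8-bit register

print(add8(77, 55), counter["nand"])  # 132, computed with 120 gate ops instead of 1 ADD
```

      And that’s just one layer of indirection; every extra level of simulated machinery multiplies the gate count again.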

    • JonEFive@midwest.social · 5 points · 5 months ago

      The fun thing about this is that we have evidence that this is how our reality works. The double-slit experiment showed that particles change their behavior when measured. (Gross oversimplification, and only under very specific circumstances, but still extremely fascinating.)

      • blahsay@lemmy.world · 3 points · 5 months ago

        The real problems would be exponential (x^m) computational issues. A finite number of AIs running around in a finite amount of space is a linear problem. Basically, very possible.