I have never dug into low-level things like CPU architectures, and I decided to give it a try when I learned about cpu.land.

I was already aware of the existence of user mode and kernel mode, but while reading the site it occurred to me: “I can still harm my system with userland programs, so what does it mean to switch to user mode for almost everything other than the kernel and drivers?” We can also do many things with syscalls, so what is stopping us (assuming we want to harm the system, of course) from damaging it?

[edit1]: grammar mistakes

  • farcaster@beehaw.org

    Well, hopefully you can’t harm your computer with userland programs. Windows is perhaps a bit messy at this, but Unix-like systems generally have pretty good protections against non-superusers interfering with either the system itself or other users on the system.

    Having drivers run in the kernel and applications run in userland also means unintentional application errors generally won’t crash your entire system. Which is pretty important…
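
    As a minimal illustration of that isolation, here is a hypothetical C sketch (assuming Linux and any C compiler; not from the thread): an invalid memory access in a userland process traps into the kernel, which kills only that process.

        #include <stdio.h>

        int main(void) {
            volatile int *p = NULL;        /* deliberately invalid pointer (volatile so the
                                              compiler doesn't optimize the store away) */
            printf("about to fault...\n");
            *p = 42;                       /* kernel traps the bad access, delivers SIGSEGV */
            return 0;                      /* never reached; the rest of the system is unaffected */
        }

    The shell reports “Segmentation fault” and every other process keeps running; the same bug inside the kernel or a driver could panic the whole machine.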

    • jarfil@beehaw.org

      Windows 7 and later have even better protections against non-superusers than Unix-like systems. It took a while for Linux to add a capabilities permission system to limit superusers, something that has been available on Windows all along.
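
      As a rough sketch of what that capability system looks like from C (assuming libcap is installed; link with -lcap; illustrative only, not from the original discussion), a process can ask which pieces of root’s power it actually holds:

          #include <stdio.h>
          #include <sys/capability.h>

          int main(void) {
              cap_t caps = cap_get_proc();   /* capability sets of this process */
              if (caps == NULL) {
                  perror("cap_get_proc");
                  return 1;
              }
              cap_flag_value_t on = CAP_CLEAR;
              /* is CAP_SYS_ADMIN in the effective set? */
              cap_get_flag(caps, CAP_SYS_ADMIN, CAP_EFFECTIVE, &on);
              printf("CAP_SYS_ADMIN effective: %s\n", on == CAP_SET ? "yes" : "no");
              cap_free(caps);
              return 0;
          }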

      • ricecake@beehaw.org

        Er, SELinux was released nearly a decade before Windows 7, and was integrated into the mainline kernel just a few years later, even before Vista added UAC.

        Big difference between “not available” and “often not enabled”.

        • jarfil@beehaw.org

          Windows 95 already had an equivalent of SELinux in the Policy Editor, “often not enabled”. UAC is the equivalent of sudo, previously “not available”.

          Windows 7 also had runtime driver and executable signature testing (“not available” on Linux), virtual filesystem views for executables (“not available” on Linux), overall system auditing (“often not enabled” on Linux), an outbound per-executable firewall (“not available” on Linux), extended ACLs for the filesystem (“often not enabled” and in part “not available” on Linux)… and so on.

          Now, Linux is great: it had a much more solid kernel model from the beginning, and being open source allows a purpose-built kernel for security, flexibility, tinkerability, or whatever. But it’s still lacking several security features from Windows, which are useful in a general-purpose system that allows end users to run random software.

          Android had to fix those shortcomings by pushing most software into a JVM, while Flatpak is getting popular on Linux. Modern Windows does most of that transparently… at a hit to performance… and doesn’t let you opt out, which angers tinkerers… but those are the drawbacks of security.

    • duncesplayed

      dd if=/dev/zero of=/dev/sda is a userland program, which I would say causes harm.

      • jarfil@beehaw.org

        /dev/sda access requires superuser/root permissions from the kernel, which means asking the kernel to lift many of the protections.
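
        A minimal sketch of that check (hypothetical, Linux-specific): the open() syscall itself is where the kernel refuses a raw disk device to a regular user.

            #include <errno.h>
            #include <fcntl.h>
            #include <stdio.h>
            #include <string.h>
            #include <unistd.h>

            int main(void) {
                int fd = open("/dev/sda", O_WRONLY);   /* the syscall crosses into the kernel */
                if (fd < 0) {
                    /* as a regular user this typically fails with EACCES */
                    printf("open failed: %s\n", strerror(errno));
                    return 1;
                }
                close(fd);   /* only reached when running with sufficient privileges */
                return 0;
            }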

        • abhibeckert@beehaw.org

          On some Unix systems (macOS, for example) you can’t even do that as root.

          You’d need to reboot into firmware, change some flags on the boot partition, and then reboot back into the regular operating system.

          To install a new version of the operating system, a Mac creates a new snapshot of your boot drive, updates the system there, then reboots, instructing the firmware to boot from the new snapshot. The firmware does a few checks of its own as well, and if the new system fails to boot, it reboots from the old snapshot (which is only removed after successfully booting into the new one). That’s not only a better, more reliable way to upgrade the operating system; it’s also the only way it can be done, because even the kernel doesn’t have write access to those files.

          The only drawback is that you can’t use your computer while the firmware checks and boots the updated system. But Apple seems to be laying the foundations for a new process where your updated operating system will boot alongside the old version (with hypervisors) in the background, be fully tested, and then switch over almost instantly. It would likely even replace the windows of running software with screenshots, then instruct the software to save its state and relaunch to restore functionality to the screenshot windows (they already do this when a Mac’s battery runs really low: closing everything cleanly before power cuts out, then restoring everything once you charge the battery).

          • jarfil@beehaw.org

            That’s interesting, I don’t have much contact with Apple’s ecosystem.

            Sounds similar to a setup that Linux allows, with the root filesystem on btrfs: make a snapshot of it, update, then live-switch kernels. But there is no firmware support to make the switch, so it relies on root having full access to everything.

            The hypervisor approach seems like what Windows is doing, where Windows itself gets booted in a Hyper-V VM, allowing WSL2 and every other VM to run at “native” speed (since “native” itself is a VM), and in theory it should allow booting a parallel updated Windows, then just switching VMs.

            On Linux there is also a feature for live-migrating VMs, which allows software to keep running while it’s being migrated, with just a minimal pause, so they could use something like that.

        • duncesplayed

          Yes, which is literally what OP is asking about. They mention system calls and are asking: if a userland program can do dangerous things using system calls, why is there a divide between user and kernel? “Because the kernel can then check permissions of the system call” is a great answer, but “hopefully you can’t harm your computer with userland programs” is completely wrong and misguided.
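
          A minimal sketch of that answer (hypothetical, Linux-specific; don’t run it as root): even a syscall as drastic as reboot(2) goes through a kernel permission check.

              #include <errno.h>
              #include <stdio.h>
              #include <string.h>
              #include <sys/reboot.h>

              int main(void) {
                  /* needs CAP_SYS_BOOT; as a regular user the kernel refuses with EPERM */
                  if (reboot(RB_AUTOBOOT) < 0)
                      printf("reboot refused: %s\n", strerror(errno));
                  return 0;   /* as root, this line would never be reached */
              }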

      • farcaster@beehaw.org

        Yeah, security is in layers, and userland isn’t automatically “safe”, if that’s what you’re pointing out; that’s why I mentioned non-superusers. Separating the kernel from userland applications is also critically important to (try to) prevent non-superusers from accessing APIs and devices that only superusers (or those in particular groups) are able to reach.