• ulterno@programming.dev · 2 days ago

    In the case of Blender, I’d say it could be a user choice.
    But I like Z-up right-handed because it matches the convention used in high school physics.

    Rather, why doesn’t Y-up software change to Z-up instead?
    Were they taught to use Y-up in Kinematics-3D? I doubt that.

      • ulterno@programming.dev · edited · 15 hours ago

        Yes.
        And that’s why I once assumed that a game engine that defaults to Y-up probably started as a 2D game engine.

        I remember getting downvoted for that.

    • RightHandOfIkaros@lemmy.world · 2 days ago

      Z-up is really only common among architectural or engineering-type software, such as CAD packages.

      Y-up is common among basically all other types of software. The DirectX API is Y-up, for example, which means anything Z-up that interacts with DirectX needs to convert the axes before doing its calculations (a practically free conversion, but the cost could matter depending on the software and platform).
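      Something like this per-vertex swap, assuming the usual mapping between Blender’s right-handed Z-up and DirectX’s left-handed Y-up (illustrative sketch only, not from any particular engine):

      ```python
      # Sketch: swapping the Y and Z components flips handedness and the up axis
      # at once, which is one common Z-up -> Y-up convention.
      def z_up_to_y_up(v):
          x, y, z = v
          return (x, z, y)  # X stays, the old Z becomes the new "up" Y

      print(z_up_to_y_up((1.0, 2.0, 3.0)))  # -> (1.0, 3.0, 2.0)
      ```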

      It’s probably fine to leave as is in the long run, but it’s just annoying to me that I need to convert between the axes in Blender’s export settings, and it would be more convenient if all the software I use shared the same system, which these days is Y-up; Blender is literally the only exception. Honestly, I agree that it should be a user setting in Preferences, but I don’t know if Blender is programmed to handle this or if it would need to be rewritten entirely.
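      For reference, the same axis choice can also be set from Blender’s Python API when exporting FBX (a rough sketch; the axis values shown are the typical “-Z forward, Y up” choice and may need adjusting for the target engine or Blender version):

      ```python
      import bpy

      # Export the scene as FBX with a Y-up, -Z-forward orientation,
      # matching what the export dialog's "Forward"/"Up" options do.
      bpy.ops.export_scene.fbx(
          filepath="//model.fbx",
          axis_forward='-Z',
          axis_up='Y',
      )
      ```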

      • ulterno@programming.dev · 15 hours ago

        Well, considering that Kinematics 3D [1] uses Z-up, that the main reason for Y-up was the 2D monitor being the primary target for graphics output back when these things got started (X and Y were mapped first, so Z had to be added when the paradigm moved to 3D), and that we are slowly transitioning to the same application being usable with both AR/VR tech and monitors [2], we might as well all go with Z-up from the start.


        1. an academic subject which should supposedly be your first introduction to a three-dimensional coordinate system, after the pre-introduction in Kinematics 2D and the pre-pre-introduction when learning the number line; that is, if the academic curriculum were sensible ↩︎

        2. which would mean that in some cases the user might be seeing an axes gimbal, requiring a translation layer later anyway, in this case on the application developer’s side, which makes it a cognitive load ↩︎