• ForgotAboutDre@lemmy.world · 8 months ago

    Skeuomorphism is much harder to implement for varying screen sizes and resolutions. When Windows 7 was first released, the range of screen resolutions and sizes was very limited.

    I remember Windows was difficult to use if you dual-booted it on a Retina MacBook. Because it couldn’t handle the high pixel density well, many applications couldn’t be scaled properly.

    Skeuomorphism is also much less useful now. People needed hints to help them understand what things did on their computer. Many people had limited interactions with computers; now almost everyone uses a computer of some form daily (largely smartphones). The remaining skeuomorphism can hinder rather than help them: they have no idea what a floppy disk is or why it would be a save symbol.

    • EldritchFeminity@lemmy.blahaj.zone · 8 months ago

      Not related to your point, but I would argue that computer usage is actually down today compared to when 7 was released, as computer literacy is proving to be more of an issue for generations younger than Millennials entering the workforce than it was for generations older than Millennials. And that’s because smartphones work differently from laptops and desktops. The UI and how you interact with your phone are fundamentally different enough that the skills aren’t interchangeable. I worked with kids in their first jobs for a number of years, and many didn’t have a computer in their house because they did everything from their phones.

    • Mossy Feathers (She/They)@pawb.social · 8 months ago

      Skeuomorphism is much harder to implement for varying screen sizes and resolutions. When Windows 7 was first released, the range of screen resolutions and sizes was very limited.

      Skeuomorphism can scale; it’d just require procedural generation of assets. Granted, that can be pretty hardware-intensive, but you could bake the assets for common screen sizes and include the procedural versions for future resolutions. Then again, you’d probably get people complaining about how their computer is slow for a minute or two after plugging it into a new display (because it’s baking new UI assets for the monitor’s resolution).

      Hmmm…
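      Just to sketch what I mean (a toy illustration in Python; the function names, sizing math, and gradient are all made up for the example): you generate each UI asset procedurally at whatever pixel size the current display calls for, then cache (“bake”) the result per resolution, so the cost is only paid the first time you plug into a new monitor.

      ```python
      # Hypothetical sketch: bake procedurally generated UI assets per display
      # resolution and cache them, so regeneration only happens on a new monitor.
      from functools import lru_cache

      def render_button(width: int, height: int) -> list[list[float]]:
          """Procedurally 'draw' a shaded button as a grid of brightness values.

          Stand-in for a real renderer that would draw gradients, bevels,
          drop shadows, etc. at exactly the pixel size requested.
          """
          pixels = []
          for y in range(height):
              row = []
              for x in range(width):
                  # Vertical gradient plus a darker 1px border, vaguely
                  # imitating a skeuomorphic "raised" button.
                  shade = 0.9 - 0.4 * (y / max(height - 1, 1))
                  if x in (0, width - 1) or y in (0, height - 1):
                      shade *= 0.5
                  row.append(round(shade, 3))
              pixels.append(row)
          return pixels

      @lru_cache(maxsize=None)
      def baked_button(resolution: tuple[int, int]) -> tuple:
          """Bake (and cache) the button at a size scaled to the display resolution."""
          screen_w, screen_h = resolution
          # Scale the asset relative to the screen so it keeps roughly the same
          # physical size on any monitor (the divisors are arbitrary here).
          return tuple(map(tuple, render_button(screen_w // 10, screen_h // 20)))

      # First call for a new monitor pays the generation cost; later calls hit the cache.
      baked_button((1920, 1080))
      baked_button((3840, 2160))
      ```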

      Skeuomorphism is also much less useful now. People needed hints to help them understand what things did on their computer. Many people had limited interactions with computers; now almost everyone uses a computer of some form daily (largely smartphones). The remaining skeuomorphism can hinder rather than help them: they have no idea what a floppy disk is or why it would be a save symbol.

      Yeah, but it’s kinda the designer’s role to find new ways of visually explaining things to people while updating existing design language to account for changes in technology, culture, etc. If a floppy disk is so outdated that it confuses people when it’s used to symbolize saving files, then designers need to find a new icon for saving.

      Also, to be clear, I’m not against buttons having text labels to go with them (e.g. if the button’s function is difficult to convey in an image); I honestly just want buttons to look like buttons. I’m tired of all the abstract, minimalist design. I want things to look “real” again.

      Edit: an example of where procedural asset generation is widely used commercially is video games. Believe it or not, most game textures nowadays are procedurally generated and then baked/rasterized before being imported into a game engine. Substance Designer and Painter (material creation and texture painting, respectively) both use procedural workflows, which lets them generate textures at practically any resolution without any additional effort. The only reason you can’t tell a game to generate 8K textures on the fly is that they’re usually baked before being imported into the engine.
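      To make that concrete (a toy illustration, not how Substance actually works internally; the checker-plus-ripple pattern is invented for the example): a procedural texture is basically a function of normalized (u, v) coordinates, so the same definition can be rasterized, i.e. baked, at any resolution you ask for.

      ```python
      # Minimal sketch: a procedural texture as a function over normalized UV
      # space, baked to a grid of whatever resolution the pipeline requests.
      import math

      def pattern(u: float, v: float) -> float:
          """Resolution-independent grayscale pattern defined over u, v in [0, 1)."""
          checker = (int(u * 8) + int(v * 8)) % 2              # 8x8 checkerboard
          ripple = 0.5 + 0.5 * math.sin(40.0 * math.hypot(u - 0.5, v - 0.5))
          return 0.7 * checker + 0.3 * ripple

      def bake(size: int) -> list[list[float]]:
          """Rasterize the pattern to a size x size grid of grayscale values."""
          return [[round(pattern(x / size, y / size), 3) for x in range(size)]
                  for y in range(size)]

      # The same definition bakes at 64, 2048, or "8K" -- only time and memory
      # change, not the authoring effort.
      thumb = bake(64)
      print(len(thumb), len(thumb[0]))  # 64 64
      ```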