Was this AI trained on an unbalanced dataset? (Only black folks?) Or has it only been used to identify photos of black people? I have so many questions: some technical, some about media sensationalism

  • Yoruio@lemmy.ca

    Was this AI trained on an unbalanced dataset (only black folks?)

    It’s probably the opposite: the AI was likely trained on a dataset of mostly white people, and is thus better able to distinguish between white faces.

    It’s a problem ML has run into before, especially for companies based in the US, where it is simply easier to collect a large number of images of white people than of people with other skin colors.
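
    A toy illustration with made-up numbers: per-group accuracy can diverge badly even when overall accuracy looks fine, which is why this kind of imbalance is easy to miss if you only report a single number.

    ```python
    # Hypothetical evaluation counts for a matcher trained on a skewed dataset:
    # many examples of group A, few of group B (all numbers are invented).
    results = {
        "group_A": {"correct": 4700, "total": 5000},  # well represented in training
        "group_B": {"correct": 300, "total": 500},    # under-represented
    }

    overall = sum(g["correct"] for g in results.values()) / sum(
        g["total"] for g in results.values()
    )
    print(f"overall accuracy: {overall:.1%}")  # 90.9% -- looks fine

    for name, g in results.items():
        print(f"{name} accuracy: {g['correct'] / g['total']:.1%}")
    # group_A accuracy: 94.0%
    # group_B accuracy: 60.0%
    ```

    The error rate for the under-represented group here is nearly seven times higher, yet the headline number hides it.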

    It’s really not dissimilar to how people work, either: humans are generally better at distinguishing between two people of a race they grew up around, and you’ll make more mistakes when trying to identify people of races you’re less familiar with.

    The problem is when the police use these tools as an authoritative matching algorithm.
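
    That failure mode can be sketched in a few lines: a matcher always returns *some* best match, so without a similarity threshold the top hit reads like an identification even when it’s weak. (The embeddings and threshold below are made up; real systems use vectors with hundreds of dimensions.)

    ```python
    import math

    def cosine_similarity(a, b):
        """Cosine similarity between two equal-length vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    # Hypothetical face embeddings.
    database = {
        "person_1": [0.9, 0.1, 0.3],
        "person_2": [0.2, 0.8, 0.5],
    }
    probe = [0.5, 0.5, 0.5]  # a face that matches nobody well

    scores = {name: cosine_similarity(probe, vec) for name, vec in database.items()}
    best = max(scores, key=scores.get)

    THRESHOLD = 0.95  # made-up operating point
    if scores[best] >= THRESHOLD:
        print(f"match: {best} ({scores[best]:.2f})")
    else:
        print(f"no confident match (best was {best} at {scores[best]:.2f})")
    ```

    A system that reports `best` without checking the threshold will name someone every single time, no matter how poor the match actually is.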

    • LetterboxPancake@sh.itjust.works

      It’s not only growing up with them. We’re just better at identifying people/animals/things we’re familiar with. Horses all look the same if you’re not around them regularly; you can distinguish colours, but that’s it.

      Not comparing people to horses by the way…

        • some_guy@lemmy.sdf.org

          I checked your name to see if you were a contrarian from another thread. You aren’t.

          Then, I thought about the name you chose. Did you mean to spell Dessert (the treat after a meal) or was that a misspelling? Then, I considered that regardless of the intent in spelling, your name appears to refer to a war (Desert Storm: USA vs Iraq in the 90s). Even if it’s playful (Desserts Storming me, yum!), I dunno. At this point, I don’t suspect we align in ideologies. I’ll stop analyzing here.

    • lntl@lemmy.mlOP

      I thought they would have trained it on mugshots. Either way, it should never be used to make direct arrests. I feel like its best use would be something like an anonymous tip line that leads to an investigation.

      • Yoruio@lemmy.ca

        Using mugshots to train AI without consent feels illegal. Plus, they wouldn’t even make a very good training set, as the AI would only learn to identify faces in perfectly straight-on shots taken under ideal lighting conditions.
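
        For what it’s worth, the standard mitigation for an overly uniform training set is augmentation: randomly perturbing lighting, pose, and so on so the model also sees non-ideal conditions. A minimal sketch of a brightness jitter (the function name and ranges are illustrative, not any real pipeline’s API):

        ```python
        import random

        def augment_brightness(pixels, seed=None):
            """Randomly scale pixel intensities to simulate non-ideal lighting."""
            rng = random.Random(seed)
            factor = rng.uniform(0.5, 1.5)  # darken or brighten by up to 50%
            return [min(255, int(p * factor)) for p in pixels]

        # A tiny fake grayscale "image" (one row of pixels).
        image = [10, 80, 160, 240]
        for seed in range(3):
            print(augment_brightness(image, seed=seed))
        ```

        Real pipelines do this with library transforms (rotation, crop, color jitter) rather than hand-rolled code, but the idea is the same: widen the distribution the model trains on.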

    • gramathy@lemmy.ml

      Also makes me wonder whether digital color spaces’ limited ability to represent darker shades contributes as well.