Was this AI trained on an unbalanced data set? (Only black folks?) Or has it only been used to identify photos of black people? I have so many questions: some technical, some on media sensationalism

  • Yoruio@lemmy.ca · 1 year ago

    Was this AI trained on an unbalanced dataset (only black folks?)

    It’s probably the opposite: the AI was likely trained on a dataset of mostly white people, and is thus better able to distinguish between white people.

    It’s a problem that has been seen in ML before, especially for companies based in the US, where it is just easier to find large numbers of white people than people of other skin colors.
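
    A toy sketch of how that imbalance shows up at evaluation time (all numbers here are invented; real matchers compare embedding similarities, but the effect is the same):

    ```python
    # Toy sketch: an unbalanced training set leaves a matcher with less
    # separable features for the minority group, so at a fixed decision
    # threshold that group collects more false matches.
    import numpy as np

    rng = np.random.default_rng(42)
    N = 100_000
    THRESHOLD = 0.7  # similarity above this counts as "same person"

    def false_match_rate(mean, spread):
        """Share of different-person pairs wrongly called a match."""
        scores = rng.normal(mean, spread, N)  # similarity of DIFFERENT people
        return (scores > THRESHOLD).mean()

    # Well-represented group: different people get clearly low similarity.
    print(f"majority group: {false_match_rate(0.30, 0.15):.2%} false matches")
    # Under-represented group: similarities are noisier and drift upward.
    print(f"minority group: {false_match_rate(0.45, 0.20):.2%} false matches")
    ```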

    It’s really not dissimilar to how people work, either: humans are generally better at distinguishing between two people of races they grew up around, and you’ll make more mistakes when trying to identify people of races you aren’t as familiar with.

    The problem is when the police use these tools as an authoritative matching algorithm.

    • LetterboxPancake@sh.itjust.works · 1 year ago

      It’s not only growing up with them. We’re just better at identifying people/animals/things we’re familiar with. Horses all look the same if you’re not around them regularly; you can distinguish colours, but that’s it.

      Not comparing people to horses by the way…

        • some_guy@lemmy.sdf.org · 1 year ago

          I checked your name to see if you were a contrarian from another thread. You aren’t.

          Then, I thought about the name you chose. Did you mean to spell Dessert (the treat after a meal) or was that a misspelling? Then, I considered that regardless of the intent in spelling, your name appears to refer to a war (Desert Storm: USA vs Iraq in the 90s). Even if it’s playful (Desserts Storming me, yum!), I dunno. At this point, I don’t suspect we align in ideologies. I’ll stop analyzing here.

    • lntl@lemmy.ml (OP) · 1 year ago

      I thought they would have trained it on mugshots. Either way, it should never be used to make direct arrests. I feel like its best use would be something like an anonymous tip line that leads to an investigation.

      • Yoruio@lemmy.ca · 1 year ago

        Using mugshots to train AI without consent feels illegal. Plus, it wouldn’t even make a very good training set, as the AI would only learn to identify faces in perfectly straight-on shots taken under ideal lighting conditions.

    • gramathy@lemmy.ml · 1 year ago

      Also makes me wonder whether our digital color spaces being bad at representing darker shades contributes as well.
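
      One quick way to check that hunch is to count how many 8-bit code values the standard sRGB transfer function leaves for a darker vs. a lighter range of linear luminance (the ranges below are arbitrary stand-ins for darker vs. lighter skin):

      ```python
      # Distinct 8-bit sRGB code values available across a range of
      # linear luminance: darker ranges get fewer levels to work with.
      def srgb_encode(linear):
          """Standard sRGB transfer function (linear light -> encoded)."""
          if linear <= 0.0031308:
              return 12.92 * linear
          return 1.055 * linear ** (1 / 2.4) - 0.055

      def code_values(lo, hi):
          return round(255 * srgb_encode(hi)) - round(255 * srgb_encode(lo))

      print(code_values(0.02, 0.06))  # darker range  -> ~30 levels
      print(code_values(0.20, 0.60))  # lighter range -> ~79 levels
      ```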

  • DavidGarcia@feddit.nl · 1 year ago

    Putting any other issues aside for a moment (I’m not saying they’re not also true): cameras need light to make photos, and the more light they get, the better the image quality. Just look at astronomy: we don’t find the dark asteroids/planets/stars first, we find the brightest ones, and we know more about them than about a planet with lower albedo/light intensity. So it is literally, physically harder to collect information about anything black, and that includes black people. If you have a person with a skin albedo of 0.2 vs. one with 0.6, you collect a third as much light in the same amount of time, all things being equal.
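
    Back-of-the-envelope version of that, using the albedo numbers above plus standard photon shot-noise statistics:

    ```python
    # Reflected light scales linearly with albedo; photon shot noise means
    # signal-to-noise only improves with the square root of light collected.
    from math import sqrt

    albedo_dark, albedo_light = 0.2, 0.6
    light_ratio = albedo_light / albedo_dark
    print(f"light collected: {light_ratio:.0f}x less for the darker subject")
    print(f"signal-to-noise: {sqrt(light_ratio):.2f}x worse")
    ```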

    Also consider that cameras have a limited dynamic range, and white skin will often sit much closer in brightness to the objects around us than black skin does. So the facial features of a black person might fall outside the dynamic range of the camera and be lost.

    The real issue with these AIs is that they aren’t well calibrated, meaning the output confidence should mirror how often predictions are correct: if the model reports a confidence of 0.3, then about 30 out of 100 such predictions should be correct. Any prediction below 90% or so should then be illegal for the police to use, or something like that. Basically, the model should tell you when it doesn’t have enough information, and the police should act on that appropriately.
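
    For anyone curious what “well calibrated” means concretely, here is a minimal reliability check run on made-up predictions (a real evaluation would use actual model outputs and ground-truth labels):

    ```python
    # Bucket predictions by reported confidence; for a calibrated model,
    # the mean confidence in each bucket matches the actual accuracy.
    import numpy as np

    rng = np.random.default_rng(1)
    conf = rng.uniform(0.1, 1.0, 10_000)  # model's reported confidence
    # Simulate a perfectly calibrated model: correct exactly that often.
    correct = rng.uniform(0, 1, conf.size) < conf

    for lo in np.arange(0.1, 1.0, 0.1):
        m = (conf >= lo) & (conf < lo + 0.1)
        print(f"confidence {lo:.1f}-{lo + 0.1:.1f}: "
              f"reported {conf[m].mean():.2f}, actual {correct[m].mean():.2f}")
    ```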

    I mean, really, facial recognition should be illegal for the police to use, but that’s beside the point.

    • some_guy@lemmy.sdf.org · 1 year ago

      I don’t know that facial recognition should be illegal for cops to use (though I don’t want them using it, overall), but there should be guardrails in place that prevent them from using it as anything more than “let’s look into this person further.”

      Put differently, a report of a certain model car of a certain color can tip them off to investigate someone driving such a car. It isn’t a reason to arrest that person.

    • Manas@lemdro.id · 1 year ago

      Exactly! I don’t think any programmer would intentionally go out of their way to make it so that only people with dark skin tones are matched from the database. It has more to do with how hard it is to detect facial features on darker skin tones: the image carries noisier information per pixel, and pixel intensities will be similar across patches of the face. But that’s just my unbiased programmer’s way of thinking. Let’s hope the world is still beautiful! We are all humans, after all.
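
      A crude way to put numbers on that intuition (everything here is invented; it just shows facial detail sinking toward a fixed noise floor):

      ```python
      # Per-patch standard deviation as a stand-in for "how much facial
      # detail survives": the same geometry produces a smaller intensity
      # swing on skin that reflects less light, while sensor noise is fixed.
      import numpy as np

      rng = np.random.default_rng(7)
      SENSOR_NOISE = 2.0  # camera noise, in 8-bit intensity levels

      def patch_contrast(detail_amplitude, n=10_000):
          detail = rng.normal(0, detail_amplitude, n)
          noise = rng.normal(0, SENSOR_NOISE, n)
          return (detail + noise).std()

      print(f"darker skin:  {patch_contrast(3.0):.1f}  (near the noise floor)")
      print(f"lighter skin: {patch_contrast(12.0):.1f}")
      ```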

    • lntl@lemmy.ml (OP) · 1 year ago

      Yes, there are technical challenges when implementing an AI solution such as this one. From a leadership perspective, however, arrests cannot be made on AI predictions alone. The predictions would be best used like an anonymous tip line that leads to further investigation, never directly to an arrest.

  • mohKohn@kbin.social · 1 year ago

    12 people. We’re talking about 12 people, so any conclusions are suspect. That being said, facial recognition struggling with black faces due to insufficient training data is an extremely common problem, so it’d be unsurprising.

    • lntl@lemmy.ml (OP) · 1 year ago

      That’s exactly my point about media sensationalism. It’s really not a large sample; way more people have been arrested and imprisoned by the justice system without any AI involvement.

    • lntl@lemmy.ml (OP) · 1 year ago

      Then the title would read:

      In every reported case where police mistakenly arrested someone, that person has been Black

      Yeah, that could be the case

  • DessertStorms@kbin.social · 1 year ago

    It’s amazing how hard some people will work to deny that demonstrable biases, influenced by the society we live in, exist in and massively impact science and technology, as if they themselves were above such things, all while literally demonstrating their own biases.

    • hh93@lemm.ee · 1 year ago

      I always wonder whether the people who push back so hard against systemic/structural racism really think they are being oppressed when someone tries to address it, or whether they are fully aware of the advantages they have just because they were born with the “right” skin colour in the right neighbourhoods, and oppose addressing it for purely egoistic reasons because they don’t want to lose that advantage.

      • DessertStorms@kbin.social · 1 year ago

        Both and/or either?
        It’s a complex and deep topic that is also inextricably intertwined with capitalism and other oppressive systems, so giving an in-depth answer would take more brain power than I currently have available lol (I have tried, but I keep going off on tangents and getting tangled up in explanations). I think this quote is quite relevant, though, so I might leave it at that:

        “If you can convince the lowest white man he’s better than the best colored man, he won’t notice you’re picking his pocket. Hell, give him somebody to look down on, and he’ll empty his pockets for you.” -Lyndon B. Johnson

    • lntl@lemmy.ml (OP) · 1 year ago

      demonstrable biases, influenced by the society we live in, exist in and massively impact science and technology,

      And then science and technology fortify society’s biases in turn; it’s a feedback loop.

  • nieceandtows@programming.dev · 1 year ago

    I don’t think this is some systemic racism. Rather, it’s the technology itself that’s lacking. I remember even those motion-activated bathroom sinks had problems working well with black hands. I think they’re just not good enough at differentiating between darkness and black skin.
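
    Those sinks typically work by bouncing infrared light off whatever is under the tap and opening the valve when enough of it comes back. A toy version of why a fixed factory threshold can fail on darker skin (all numbers invented):

    ```python
    # Toy model of an active-IR faucet sensor: emit light, measure the
    # reflection, trigger only if it clears a fixed threshold.
    EMITTED = 100.0    # arbitrary units of IR light sent out
    THRESHOLD = 25.0   # invented factory-calibrated trigger level

    def water_flows(skin_reflectance):
        return EMITTED * skin_reflectance > THRESHOLD

    print(water_flows(0.50))  # lighter skin -> True, water flows
    print(water_flows(0.20))  # darker skin  -> False, nothing happens
    ```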

      • nieceandtows@programming.dev · 1 year ago

        Just because it can be doesn’t mean that’s the first place we go to. Stupidity is more likely than maliciousness and all that.

        • Fish Id Wardrobe@mastodon.me.uk · 1 year ago

          @nieceandtows ::gestures at mankind’s enormously long and complex history of systemic racism::

          People are going to read your post and assume that you are apologising for the police *because* they are the police. You know, one of those “they must have done something wrong or the police wouldn’t have arrested them” people.

          I’m sure you’re not one of those. Just saying how it looks.

          • nieceandtows@programming.dev · 1 year ago

            Eh. I have a loved one with some undiagnosed mental health issues, and it’s a constant struggle because they always assume the worst of anyone and everyone. Watching/living with them, I’ve learned that it’s always better to assume good about people than to assume bad about them. Assuming bad things without proof only ever ruins your happiness and relationships. People can read my comments and understand what I’m saying, like you do. If they don’t and assume I’m racist, it only proves my point.

              • nieceandtows@programming.dev · 1 year ago

                Not talking about systemic racism in general; I know there’s a lot of that. I’m talking about systemic racism causing this particular issue. I say that because there have been cases of motion sensors not detecting black hands due to technical issues. I’m not apologizing for anyone, just pointing out that this has happened before due to technical deficiencies.

                • Fish Id Wardrobe@mastodon.me.uk · 1 year ago

                  @nieceandtows The fact that there have been issues with sensors (which is true) does not disprove systemic racism (which exists). That’s like saying that because I put vinegar in the dressing, the lemon juice wasn’t sour. It doesn’t follow.

    • vrighter@discuss.tchncs.de · 1 year ago

      Haha, this is reminding me of an episode of Better Off Ted where they replaced all the sensors with optical-based ones that did not recognize black people. Their solution was to hire white guys to follow them around to open doors and turn on lights for them.

    • cobra89@beehaw.org · 1 year ago

      IMO, the fact that the models aren’t accurate with people of color but they’re putting the AI to use on them anyway is the systemic racism. If the AI were not good at identifying white people, do we really think it would be in active use for arresting people?

      It’s not the fact that the technology is much much worse at identifying people of color that is the issue, it’s the fact that it’s being used anyway despite it.

      And if you say “oh, they’re just being stupid and didn’t realize it’s doing that,” then it’s egregious that they didn’t even check for it.

      • nieceandtows@programming.dev · 1 year ago

        That part I can agree with. These issues should have been fixed before it was rolled out. The fact that they don’t care is very telling.

    • DessertStorms@kbin.social · 1 year ago

      it’s the technology itself that’s lacking.

      The technology is designed by people, people who didn’t consider those with dark skin, and so they designed a technology that is lacking.
      Let’s not act as if technology just springs spontaneously into being.

    • lntl@lemmy.ml (OP) · 1 year ago

      I think it is some systemic racism. AI didn’t arrest this person; police officers did. They did no further investigation before making the arrest because they didn’t have to: the person has black skin. Case closed.

  • EinfachUnersetzlich@lemm.ee · 1 year ago

    Just a note that it appears this is USA only. No comment on differences around the world (if the technology is used elsewhere).

    • buwho@lemmy.ml · 1 year ago

      In the documentary Coded Bias, the police in England were using new AI facial recognition technology to find criminals on the streets, and yes, the people it flagged were almost all black. The documentary states that this is because the models were trained on mostly white people, so the system couldn’t differentiate black people’s features as well. Or something to that effect.

    • lntl@lemmy.ml (OP) · 1 year ago

      I imagine that in China the headline would read:

      In every reported case where police mistakenly arrested someone using facial recognition, that person has been Chinese

  • nxfsi@lemmy.world · 1 year ago

    Reminder that Google still hasn’t actually fixed the issue in which their image recognition algorithm mislabels black people as gorillas

      • Bibliotectress@lemmy.world · 1 year ago

        I watch it on Hulu, in case you have that. The whole show is a joke about corporate America and how companies will do anything for a profit. It’s so good. There are only 2 seasons.

      • Rentlar@lemmy.ca · 1 year ago

        The parent commenter is being downvoted… but with a camera tuned for outdoor visibility, I can see why it’s difficult to detect features on a person with darker skin…

        Similar to how it’s so hard to take good pictures of this little guy:

        [image: black cat in cardboard box]

    • lntl@lemmy.ml (OP) · 1 year ago

      Nothing more? A pregnant lady was arrested and put in jail. If this AI “can’t discern” then it shouldn’t be used to make arrests.