• NephewAlphaBravo [he/him]@hexbear.net
      17 days ago

      computers do what we tell them to do, exactly what we tell them to do, no more and no less. they don’t intuit their way through bad directions or misspellings to figure out what we actually intended; if we accidentally misspoke and told them to do something stupid, they’ll happily do the stupid thing

      so yeah the short answer is any bias, even unintentional, on the part of the programmer will be reflected in the machine’s output
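The point above can be sketched in a few lines. This is a toy example with made-up field names, not anyone's real code: a single typo in a filter does "the stupid thing" silently, and the program reports success anyway.

```python
# Toy records (hypothetical data, for illustration only).
records = [{"name": "Ada"}, {"name": "Grace"}]

# Intended: keep records that have a "name" key.
# Typo: "mane" instead of "name" -- Python raises no error,
# it just does exactly what it was told and drops everything.
kept = [r for r in records if "mane" in r]

print(len(kept))  # -> 0, no warning, no complaint
```

The machine never guesses that "mane" was supposed to be "name"; the mistake, like any bias the programmer bakes in, flows straight through to the output.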

    • ThermonuclearEgg [she/her, they/them]@hexbear.net
      17 days ago

      It’s pretty easy. To exaggerate: suppose I trained one of these models exclusively on photos of white people and exclusively on racist comments. Do you think it would respond appropriately to the existence of black people?

      Spoiler alert: No. That’s why we got Tay, the Nazi chatbot, and Google Photos labeling black people as gorillas
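The training-data problem above can be shown with a deliberately extreme toy model (hypothetical numbers and labels, nothing to do with the real systems mentioned): a nearest-neighbour classifier trained on examples of only one class will confidently assign that class to *every* input, because nothing else exists in its world.

```python
def nearest_label(train, point):
    """Return the label of the training example closest to `point`.

    `train` is a list of (value, label) pairs; distance is plain
    absolute difference on a 1-D toy feature.
    """
    return min(train, key=lambda ex: abs(ex[0] - point))[1]

# Training set containing ONLY class "A" -- the over-represented group.
train = [(0.1, "A"), (0.2, "A"), (0.3, "A")]

# An input a balanced model might call "B" still comes back "A":
# the model can only ever echo the data it was given.
print(nearest_label(train, 0.9))  # -> A
```

Real models are vastly more complex, but the failure mode is the same shape: the output distribution can only reflect the training distribution, biases included.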