I suppose it depends, but how the f*** does a machine have bias?
Computers do exactly what we tell them to do, no more and no less. They don't intuit their way past bad directions or misspellings to figure out what we actually intended; if we accidentally tell them to do something stupid, they'll happily do the stupid thing.

So yeah, the short answer is that any bias on the part of the programmer, even unintentional bias, will be reflected in the machine's output.
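A contrived little sketch of that literalness (the account data and names are made up for illustration): the programmer means to keep the admin accounts and purge the rest, writes the condition backwards, and the machine faithfully executes the typo rather than the intent.

```python
# Contrived sketch: the *intent* is to keep admins and drop everyone
# else, but the condition is inverted. The computer doesn't guess what
# was meant; it keeps exactly the accounts the code says to keep.
# (All data here is invented for illustration.)

accounts = [
    {"name": "alice", "is_admin": True},
    {"name": "bob",   "is_admin": False},
]

def keep_admins(accounts):
    # Intended: return only the admins.
    # As written, `not` flips the test, so only non-admins survive.
    return [a for a in accounts if not a["is_admin"]]

print(keep_admins(accounts))  # [{'name': 'bob', 'is_admin': False}]
```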
It’s pretty easy. To take an exaggerated example: suppose I trained one of these models exclusively on photos of white people, and exclusively on racist comments. Do you think it would respond appropriately to the existence of black people?

Spoiler alert: no. That’s how we got Tay, Microsoft’s chatbot that turned Nazi, and Google Photos labeling black people as gorillas.
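And a minimal sketch of the mechanism, using a toy nearest-centroid classifier rather than any real production model (the labels and points are invented): a model trained on only one kind of example can only ever answer with that kind.

```python
# Toy nearest-centroid classifier, to illustrate "garbage in, garbage
# out": a model trained on a skewed dataset can only echo that skew.
# Nothing here is a real system; all names and numbers are made up.

from collections import defaultdict

def train(samples):
    """Compute one centroid per label from ((x, y), label) pairs."""
    sums = defaultdict(lambda: [0.0, 0.0])
    counts = defaultdict(int)
    for (x, y), label in samples:
        sums[label][0] += x
        sums[label][1] += y
        counts[label] += 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Return the label of the nearest centroid. The model can only
    ever answer with labels it saw during training."""
    px, py = point
    return min(centroids,
               key=lambda lbl: (centroids[lbl][0] - px) ** 2
                             + (centroids[lbl][1] - py) ** 2)

# Hypothetical skewed training set: every example carries label "A".
skewed_data = [((1.0, 2.0), "A"), ((1.5, 1.8), "A"), ((0.9, 2.2), "A")]
model = train(skewed_data)

# A point far from everything it has seen still comes back as "A",
# because "A" is the only answer the training data ever contained.
print(predict(model, (10.0, 10.0)))  # -> "A"
```

The bias isn’t in the algorithm; it’s in what the algorithm was fed.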
GenAI™ is the Social Darwinism of computer science.
garbage in, garbage out