Artificial Intelligence
Artificial Intelligence is commonly used today, from Siri's voice recognition to Tesla's self-driving capability. The development of intelligent machines created to think and work like humans is ever-evolving, but not without apparent errors. These errors include bias along racial and gender lines, among others. These ideas will be explored in more detail in this discussion post.
Joy Buolamwini was moved to write an article on her experience with face recognition when she found that the software was biased. She is a dark-skinned woman, and she found that some face recognition software would not recognize her face unless she put on a white mask. This sparked her interest in examining the algorithms used to build face recognition systems, which she found were predominantly trained on, and therefore performed best for, light-skinned males (Buolamwini, 2019).
Not very long ago, Amazon created an Artificial Intelligence tool that allowed people to submit their résumés to be considered for hiring at Amazon. Amazon's AI algorithm was designed to separate the most suitable résumés from the incompatible ones. The algorithm ran into a large issue when the people it selected turned out to share very similar characteristics: almost none were women, because the data Amazon had used to train its AI included very few résumés from women.
The two examples just explored support the claim that the advantages of AI systems are not worth the biases if those biases go uncorrected. If the face recognition system were left uncorrected, it would suggest that the software's creators were biased along racial or gender lines. This would create negative publicity for the creator, reflected in a sudden decrease in buyers. Amazon's error would likewise spread widely through technology news coverage, most likely causing a significant decrease in customers and partnerships. Fairness can be built into AI systems by training the algorithms on varied information about the people they are meant to serve, for example, face recognition algorithms trained on a balanced mix of dark-skinned men and women, Asian men and women, and white men and women.
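One common way to pursue the balanced training described above is to oversample underrepresented demographic groups until every group appears equally often in the training set. The sketch below illustrates that idea in Python; the function name, the `"group"` field, and the sample data are all hypothetical, not taken from any real face recognition system.

```python
import random
from collections import Counter

def balance_by_group(samples, group_key, seed=0):
    """Oversample each demographic group to the size of the largest group,
    so every group is equally represented in the training data.
    `samples` is a list of dicts; `group_key` names the demographic field.
    (Illustrative sketch only -- not from any specific library.)"""
    rng = random.Random(seed)
    groups = {}
    for s in samples:
        groups.setdefault(s[group_key], []).append(s)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Randomly duplicate members of smaller groups until each
        # group reaches the target size.
        balanced.extend(rng.choice(members) for _ in range(target - len(members)))
    rng.shuffle(balanced)
    return balanced

# Hypothetical, skewed dataset: 80 samples from one group, 20 from another.
faces = ([{"group": "light-skinned male"}] * 80
         + [{"group": "dark-skinned female"}] * 20)
balanced = balance_by_group(faces, "group")
print(Counter(s["group"] for s in balanced))
# Each group now appears 80 times.
```

In practice, collecting genuinely new data from underrepresented groups is preferable to duplicating existing samples, but oversampling is a simple first step toward the balanced representation the examples above call for.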