The Question: What if beauty pageant judges were machines? A robot would ideally lack a human's often harmful social biases. Beauty.ai, an initiative by the Russia- and Hong Kong-based Youth Laboratories supported by Microsoft and Nvidia, ran a beauty contest with 600,000 entrants, who sent in selfies from around the world—India, China, all over Africa, and the US. A set of three algorithms judged the entries based on facial symmetry, wrinkles, and how young or old the entrants looked for their age. The algorithms did not evaluate skin color.
The Results: Of the 44 people the algorithms judged to be the most "attractive," all of the finalists were white except for six who were Asian. Only one finalist had visibly dark skin. All three algorithms used a style of machine learning called "deep learning." This is not an isolated case: a language-processing algorithm was recently found to rate white names as more "pleasant" than black names, mirroring earlier psychology experiments on humans.
The Problem: The root cause is a lack of diversity of people and opinions in the human-created databases used to train AI. In this case, the large majority (75 percent) of contest entrants were European and white; seven percent were from India, and one percent were from the African continent. A model trained on such data learns that majority-group faces are the norm. The same pattern predates AI: camera film was originally designed to perform best with white skin in frame, meaning that until the industry corrected the base issue, every camera exhibited a racial bias even in the hands of ostensibly non-racist photographers.
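To see how a skewed entrant pool alone can produce skewed results, here is a minimal sketch. The group labels and the `familiarity_score` function are hypothetical illustrations, not the contest's actual model: it stands in for a learned system that scores faces higher simply because similar faces dominated its training data, without ever looking at skin color directly.

```python
from collections import Counter

# Hypothetical training pool mirroring the contest's reported skew:
# 75% group "A" (European/white), 7% "B" (Indian), 1% "C" (African),
# and the remaining 17% grouped as "D" (other).
training_set = ["A"] * 75 + ["B"] * 7 + ["C"] * 1 + ["D"] * 17

counts = Counter(training_set)

def familiarity_score(group: str) -> float:
    # Naive stand-in for a trained model: a face scores higher the
    # more often similar faces appeared in the training data.
    # Group membership is never an explicit input to a real model,
    # but exposure frequency acts as a hidden proxy for it.
    return counts[group] / len(training_set)

for g in "ABCD":
    print(g, familiarity_score(g))
```

Even though nothing in the scoring rule mentions skin color, the 75-to-1 exposure gap between groups "A" and "C" is reproduced directly in the scores, which is the mechanism the contest's results suggest.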
The Solution: Change the system: diversify the training data, and train more people of color in machine learning so that the people building these models reflect the people they judge.