Snapshot_143

Racist artificial intelligence can only be stopped if Silicon Valley giants share the secret databases used to train it, the creator of a viral selfie app has claimed.

Trevor Paglen is an artist and one of the pair behind an app that exposed racist and sexist flaws in a colossal database used to train AI. He has warned that these same flaws could be present in systems developed by big technology companies.

The flaws could have spread to companies including Google, Microsoft, Facebook and Huawei if they used it as a “seed” database, he has claimed.

Paglen says we can assume similar problems exist in the databases of companies such as Google and Facebook, but outsiders have no way to see them.

He also says these databases are often trade secrets, and this is a huge problem for the field of machine learning in general, especially in applications that touch people’s everyday lives.

Paglen has called for “a lot more transparency” from companies on how machine learning systems are being used and how they are classifying people, to stop such systems from making biased decisions.

Mr Paglen’s app, created with the AI researcher Kate Crawford and called ImageNet Roulette, exposed that pictures of black and ethnic minority people generated race labels such as “negroid” or “black person”, while results for Caucasian faces varied more widely, including “researcher”, “scientist” or “singer”.

In other words, white people were more likely to be categorised as a specific profession or character type, whereas non-white people were more likely to be categorised by their race alone, which can carry negative connotations.

The app, which was “trained” using a popular image recognition database called ImageNet, was described as “a peek into the politics of classifying humans in machine learning systems and the data they’re trained on”.

ImageNet, created by Stanford University scientists, has been credited with kickstarting the modern AI boom and has become a benchmark against which new image recognition systems are measured.

The team led by Stanford professor Fei-Fei Li has committed to removing over 600,000 images of people from the database since the app went viral earlier this week.

Mr Paglen’s comments follow plans launched by the UK government to pilot diversity regulations for staff working on artificial intelligence to reduce the risk of sexist and racist computer programs.
