Researchers have created a test to assess gender and racial biases in machine learning algorithms. The paper was presented at the Neural Information Processing Systems (NIPS) conference in Barcelona, Spain, alongside several others on the subject. Concern has grown as incidents of such discrimination have come to light. In July, Google voice recognition software was found to recognize male voices considerably more reliably than female voices. A crime prediction algorithm developed by the Los Angeles Police Department was found to reinforce racial bias, encouraging a continued focus on black neighborhoods rather than on other areas where data later showed crime was actually concentrated. And a Google ad platform for jobs was found to direct ads for highly paid executive positions towards men.

One co-author of the paper, Moritz Hardt, a senior research scientist at Google, said, “Decisions based on machine learning can be both incredibly useful and have a profound impact on our lives … Despite the need, a vetted methodology in machine learning for preventing this kind of discrimination based on sensitive attributes has been lacking.”

Another co-author, Nathan Srebro, a computer scientist at the Toyota Technological Institute at Chicago, said, “We are trying to enforce that you will not have inappropriate bias in the statistical prediction.”

In machine learning programs, the criteria for decision-making are learned over time by the algorithm itself rather than being pre-determined by a human programmer. As a result, these criteria are often unknown even to the programmers who wrote the software.

“Even if we do have access to the innards of the algorithm, they are getting so complicated it’s almost futile to get inside them. The whole point of machine learning is to build magical black boxes,” said Srebro.

The test designed by the scientists gets around this problem by simply analyzing the data that goes into a program in relation to the decisions that come out. According to Srebro, “Our criteria does not look at the innards of the learning algorithm. It just looks at the predictions it makes.”

They are calling their method Equality of Opportunity in Supervised Learning. The approach aims to ensure that decisions made by such algorithms do not reveal any information about race or gender beyond what could already be gleaned from the initial data. For example, if men were found to be twice as likely as women to default on a bank loan, and an algorithm calculated that the best approach was to reject all applications from men and accept all applications from women, this would be considered inappropriate discrimination, according to Srebro.
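A rough sense of how such a black-box check works can be given in code. The sketch below is illustrative only; the function names, the tolerance, and the toy loan data are assumptions rather than the authors' implementation. It looks only at the model's predictions, the true outcomes, and the group labels, and asks whether applicants who genuinely repaid were approved at roughly equal rates across groups, which is the equal-opportunity condition in its simplest form.

```python
# Minimal sketch of an equal-opportunity check on a black-box predictor.
# Assumptions: binary labels (1 = repaid, 1 = approved) and illustrative names.
from collections import defaultdict

def true_positive_rates(y_true, y_pred, group):
    """Approval rate among genuinely creditworthy applicants, per group."""
    hits = defaultdict(int)    # approved AND repaid, per group
    totals = defaultdict(int)  # repaid, per group
    for truth, pred, g in zip(y_true, y_pred, group):
        if truth == 1:
            totals[g] += 1
            hits[g] += int(pred == 1)
    return {g: hits[g] / totals[g] for g in totals}

def violates_equal_opportunity(y_true, y_pred, group, tolerance=0.05):
    """Flag the predictor if the gap between group rates exceeds a tolerance."""
    rates = true_positive_rates(y_true, y_pred, group)
    return max(rates.values()) - min(rates.values()) > tolerance

# Toy usage: a rule that rejects every man fails the test, because
# creditworthy men are approved at rate 0 while creditworthy women are at 1.
y_true = [1, 1, 0, 1, 1, 0]                 # 1 = actually repaid
y_pred = [0, 0, 0, 1, 1, 0]                 # 1 = loan approved
group  = ["M", "M", "M", "F", "F", "F"]
print(true_positive_rates(y_true, y_pred, group))        # {'M': 0.0, 'F': 1.0}
print(violates_equal_opportunity(y_true, y_pred, group)) # True
```

Note that nothing in this check depends on how the predictor was built, which is the point Srebro makes about examining only the predictions.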

However, some have criticized the approach as a workaround that sidesteps the need for transparency when these algorithms can have a significant impact on the lives of individuals. According to Noel Sharkey, emeritus professor of robotics and AI at the University of Sheffield:

“Machine learning is great if you’re using it to work out the best way to route an oil pipeline. Until we know more about how biases work in them, I’d be very concerned about them making predictions that affect people’s lives.”
