Yes, an algorithm can be biased, racist and discriminatory

Two days ago, an interview with Cynthia Dwork, a computer scientist at Microsoft Research, explored how algorithms can end up making discriminatory decisions. The common perception that algorithms simply process data and reach unbiased decisions is, she claims, wrong. Biases can be introduced into algorithms either by their creators, who write code that reflects their own opinions, or through machine learning, where an algorithm processes historical data to learn how past decisions were made in order to make future ones.

As it is, algorithms are making more and more decisions that greatly impact people’s lives and livelihoods. A few examples: a person’s predicted ability to repay a loan, and thus the cost and availability of that loan; whether a recruiter will see their resume among the thousands submitted for consideration; whether they will even see the ad for the job; down to what “recommended items” they will see when shopping. In an earlier article, the Times quoted three recent studies that found that “Google’s online advertising system showed an ad for high-income jobs to men much more often than it showed the ad to women… Research from Harvard University found that ads for arrest records were significantly more likely to show up on searches for distinctively black names or a historically black fraternity, [and] the Federal Trade Commission said advertisers are able to target people who live in low-income neighborhoods with high-interest loans.” Last week Facebook secured a patent on determining creditworthiness by analyzing a user’s friends and connections, one more of the many algorithms that decide who gets loans and what they pay for them. Just today, while researching this post, I found that Telefónica will now select which startups to invest in based on, you guessed it, the recommendation of an algorithm.

So, how to solve the issue? Step one, as always, is awareness. In a post two weeks ago I wrote about how a new startup, Upstart, uses different criteria in its creditworthiness algorithm than the commonly used FICO score from Fair Isaac. Its founder described these criteria as “character.” What is character? Whatever its team decided, and it is most likely biased toward approving a loan to someone very similar to them: young, employed, and fresh out of college. That profile doesn’t necessarily earn a high rating under the FICO model, but that doesn’t make the new approach fairer. It just means they have exchanged an old bias for a new one. Be aware of the trade-off.

Even Justice isn’t blind. Source: J.H. Janßen, Wikimedia

Ms Dwork explains not only how algorithms can discriminate, but also how to attempt to balance that bias with (surprisingly) more data. “Fairness Through Awareness” uses more personal information in its analysis, not less: “Fairness means that similar people are treated similarly. A true understanding of who should be considered similar for a particular classification task requires knowledge of sensitive attributes, and removing those attributes from consideration can introduce unfairness and harm utility.” It requires that the body using the algorithm to reach decisions gather and use data that will, in the end, eliminate the bias. It also requires a lot more work and research.
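To make the idea concrete, here is a minimal sketch of the “similar people are treated similarly” condition, assuming a loan-approval setting. The applicant fields, the similarity metric, and the scoring model below are hypothetical illustrations of my own, not Dwork’s actual construction; the point is only that fairness is checked against a task-specific notion of similarity, rather than by deleting sensitive attributes and hoping for the best.

```python
# Sketch of the "Fairness Through Awareness" condition: similar individuals
# should receive similar outcomes. The metric, applicant data, and scoring
# function are hypothetical illustrations, not the paper's construction.

from itertools import combinations

def similarity_distance(a, b):
    # Task-specific metric: how different two loan applicants are for the
    # purpose of creditworthiness (0 = identical, 1 = completely different).
    # A real metric would be designed with knowledge of sensitive attributes
    # where they are genuinely relevant, not by dropping them.
    income_gap = abs(a["income"] - b["income"]) / 100_000
    history_gap = abs(a["years_of_credit"] - b["years_of_credit"]) / 30
    return min(1.0, 0.5 * income_gap + 0.5 * history_gap)

def violates_fairness(applicants, model):
    # Lipschitz-style check: the gap between two applicants' approval scores
    # should not exceed the distance between the applicants themselves.
    violations = []
    for a, b in combinations(applicants, 2):
        gap = abs(model(a) - model(b))
        if gap > similarity_distance(a, b):
            violations.append((a["id"], b["id"], gap))
    return violations

if __name__ == "__main__":
    applicants = [
        {"id": "A", "income": 55_000, "years_of_credit": 2},
        {"id": "B", "income": 54_000, "years_of_credit": 2},  # nearly identical to A
    ]
    # A biased model that happens to score B far below A.
    biased_model = lambda x: 0.9 if x["id"] == "A" else 0.3
    print(violates_fairness(applicants, biased_model))
    # -> [('A', 'B', 0.6...)]: similar people, very different treatment.
```

The check only flags the unfairness; fixing it means either changing the model or arguing that the two applicants are not, in fact, similar for this task, which is exactly where the extra data and research come in.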

Now that there is at least more awareness of possible bias in algorithms, we can stop pretending that they are fairer than humans. They’re not. They are merely a reflection of their data and their creators.
