The fairness of new loan startups

There is a myth that if a business collects lots of user data, creates a smart algorithm that can extract meaningful information from that data, and then monetizes that information, it can’t go wrong. Smarter, targeted advertising is the most popular way to monetize reams of user data (see: Google, Facebook, etc.), but two startups, Upstart and Zest, have found another: loans.

Basing loans on university attended, SAT scores and GPA.

Upstart offers loans to people who best match its criteria, which it says are based on finding people with “good character.” It analyzes data like SAT scores, colleges attended, majors and grade-point averages. Says Paul Gu, Upstart’s co-founder and head of product, “if you take two people with the same job and circumstances, like whether they have kids, five years later the one who had the higher GPA is more likely to pay a debt.”

Judging people’s ability to repay a debt by their history is not a new concept. Fair Isaac has been doing just that for years and claims that over 90% of lenders in the US use its score. It analyzes data it calls credit histories and generates a score between 300 and 850. A lender can use that score to decide whether to offer a loan, at what interest rate, and under what terms. Upstart is basing its score on a completely different data set than Fair Isaac, and has already made $150 million in loans based on its analysis of that data.
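To make the mechanics concrete, here is a minimal sketch of how a lender might turn a FICO-style score into a decision and a rate tier. The thresholds and rates below are illustrative assumptions for this post, not Fair Isaac’s formula or any lender’s actual policy:

```python
# Hypothetical sketch: mapping a FICO-style score (300-850) to a loan
# decision and interest-rate tier. All cutoffs and rates here are made up
# for illustration; real lenders set their own.

def loan_offer(score: int) -> dict:
    """Return an approval decision and rate tier for a score in [300, 850]."""
    if not 300 <= score <= 850:
        raise ValueError("FICO-style scores range from 300 to 850")
    if score < 580:
        return {"approved": False, "rate": None}   # declined
    elif score < 670:
        return {"approved": True, "rate": 0.18}    # subprime tier
    elif score < 740:
        return {"approved": True, "rate": 0.11}    # near-prime tier
    else:
        return {"approved": True, "rate": 0.07}    # prime tier

print(loan_offer(720))  # {'approved': True, 'rate': 0.11}
```

What Upstart and Zest change is not this decision step but the inputs that produce the score in the first place.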

Another company, Zest, offers loans based on different data points than both Upstart and Fair Isaac. One interesting signal they use is “whether someone has ever given up a prepaid wireless phone number. Where housing is often uncertain, those numbers are a more reliable way to find you than addresses; giving one up may indicate you are willing (or have been forced) to disappear from family or potential employers. That is a bad sign.” Both of these companies are gathering different data points, analyzing them to find signals that will enable them to predict repayment. And both are putting their money where their algorithm is by offering the loans.

I have often misunderstood what needs to be done to improve my credit score and have, on more than one occasion, done the exact opposite of what’s recommended, yet I have never defaulted on a loan or even made a late payment. So I was happy to see someone trying to disrupt the current model. The determination of the FICO score is shrouded in mystery and seems rife with absurdities. Even Ben Bernanke, former chairman of the Federal Reserve, couldn’t refinance his mortgage after leaving his post. “The problem probably boils down to this: Anybody who knows how the world works may know that Ben Bernanke has vast earning potential, and that he is as safe a credit risk as one could imagine. But he just changed jobs a few months ago. And in the thoroughly automated world of mortgage finance, having recently changed jobs makes you a steeper credit risk.” Straying from the existing data set used by Fair Isaac would probably help more people who have a low credit risk but, for whatever reason, less than perfect credit scores, get loans.

Jure Leskovec, a professor of computer science at Stanford, thinks that algorithms are better than people at avoiding biases. “Algorithms aren’t subjective,” he said. “Bias comes from people.” There is, of course, some truth in what he says, but (and the NY Times called him on this as well) algorithms don’t fall from trees; they don’t write themselves. Someone, a human, has to believe at some point that a given set of criteria is the right set. Sure, data is used to back up the algorithm, just as data is entered into it to create a score, but it’s not entirely devoid of human intervention.

Algorithms are not impartial and unbiased. For example, basing credit risk on the university attended assumes the admissions committee at a certain school is operating without a bias. Most likely, it isn’t. Not all the candidates had the same opportunities to impress the committee; not all went to a school where AP classes were available; not all had the ability to play the violin from an early age. Also, I’m sure that neither of these companies uses skin color as a criterion, but do they exclude addresses in certain neighborhoods or zip codes? Maybe they don’t score women lower than men, but are people with year-long employment gaps (think maternity leave) scored lower? These are just some assumptions that could entrench bias. Upstart says it is judging character, but who is to say whether its definition of character, based on its founders’ life experiences, makes sense for people growing up in different environments?
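One way this kind of proxy bias can be surfaced: even if a model never sees a protected attribute, a feature like zip code can stand in for it, and a simple audit is to compare approval rates across groups. The sketch below does that on synthetic data; the applications, group labels, and zip codes are all made up for illustration:

```python
# Illustrative proxy-bias audit on synthetic data: a model that keys on
# zip code may still produce very different approval rates across groups
# it was never shown. All records below are fabricated for this sketch.

from collections import defaultdict

applications = [
    # (zip_code, group, approved) -- synthetic examples
    ("10001", "A", True), ("10001", "A", True), ("10001", "B", True),
    ("60605", "B", False), ("60605", "B", False), ("60605", "A", True),
]

def approval_rate_by(key_index: int) -> dict:
    """Approval rate keyed by the field at key_index (0=zip, 1=group)."""
    counts = defaultdict(lambda: [0, 0])  # key -> [approved, total]
    for app in applications:
        key, approved = app[key_index], app[2]
        counts[key][0] += int(approved)
        counts[key][1] += 1
    return {k: approved / total for k, (approved, total) in counts.items()}

by_group = approval_rate_by(1)
# A large gap between groups suggests some feature is acting as a proxy.
disparity = max(by_group.values()) - min(by_group.values())
```

In this toy data, group A is approved 100% of the time and group B only a third of the time, even though “group” was never an input to the hypothetical decision; the zip code carried the signal.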

Lest we forget where we started, it’s not as if the current system is perfect. After all, it prizes length of time in the current job, not taking into account that it’s very common for tech workers to switch jobs frequently, and they are great at repaying loans. So having a new set of loan criteria in town is good for consumers. It’s also encouraging that Upstart and Zest put their own money where their algorithm is and actually offer loans based on their algorithms. It’s gutsy to say: we trust our algorithm so much, we will bet the house on it. Just remember, algorithms don’t build themselves. Humans plan them, humans design them, and humans build them.
