
Prejudice isn’t the only problem with credit scores, and no, AI can’t fix it.


But in the largest study yet of real-world mortgage data, economists Laura Blattner at Stanford University and Scott Nelson at the University of Chicago show that differences in mortgage approval between minority and majority groups stem not only from prejudice, but also from the fact that minority and low-income groups have less data in their credit histories.

This means that when this data is used to calculate a credit score, and that credit score is used to predict loan default, the prediction will be less precise. It is this lack of precision that leads to inequality, not prejudice alone.
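To see the mechanism, consider a minimal Python sketch (the default rate and the estimated_rate and mean_error helpers are invented for illustration, not taken from the study): two consumers carry the same true risk, but the one with a thinner credit file gets a noisier estimate of it.

```python
# A minimal sketch (invented numbers, not drawn from the study) of why a
# thin credit file yields a noisier risk estimate: the fewer repayment
# events we observe, the further the estimated default rate tends to sit
# from the true one.
import random

random.seed(0)

TRUE_DEFAULT_RATE = 0.10  # assumed ground-truth risk, identical for everyone

def estimated_rate(n_events: int) -> float:
    """Estimate the default rate from n_events observed repayment outcomes."""
    defaults = sum(random.random() < TRUE_DEFAULT_RATE for _ in range(n_events))
    return defaults / n_events

def mean_error(n_events: int, trials: int = 10_000) -> float:
    """Average absolute estimation error across many simulated consumers."""
    return sum(abs(estimated_rate(n_events) - TRUE_DEFAULT_RATE)
               for _ in range(trials)) / trials

# A "thick" file with 100 observed events vs a "thin" file with 10.
print(f"thick file (100 events): mean error {mean_error(100):.3f}")
print(f"thin file   (10 events): mean error {mean_error(10):.3f}")
```

Run it and the thin file’s estimate lands roughly three times further from the truth on average, even though nothing about the consumer’s actual risk differs.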

The implications are stark: more accurate algorithms alone will not solve the problem.

“It’s a really impressive result,” says Ashesh Rambachan, who studies machine learning and economics at Harvard University but was not involved in the study. Prejudice and sparse credit records have been hot-button issues for some time, but this is the first large-scale study to look at the loan applications of millions of real people.

Credit scores squeeze a range of socio-economic data, such as employment history, financial records, and buying habits, into a single number. In addition to deciding on loan applications, credit scores are now used to make many life-changing decisions, including decisions about insurance, hiring, and housing.


To understand why minority and majority groups were treated differently by mortgage lenders, Blattner and Nelson collected credit reports for 50 million anonymized U.S. consumers and linked each of them to socio-economic details taken from a marketing dataset, to their property deeds and mortgage transactions, and to data on the mortgage lenders who provided them with loans.

One of the reasons why this is the first study of its kind is that these data sets are proprietary and are not publicly available to researchers. “We went to a credit bureau and, in practice, we had to pay a lot of money to do this,” Blattner says.

Noisy data

They then experimented with various predictive algorithms to show that credit scores were not only prejudicial but “noisy,” a statistical term for data that cannot be used to make accurate predictions. Take a minority applicant with a credit score of 620. In a biased system, we might expect this score to always overstate that applicant’s risk, so that a more accurate score would be, say, 625. In theory, this bias could then be corrected by a form of algorithmic affirmative action, such as lowering the approval threshold for minority applications.
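The distinction matters because bias and noise call for different remedies, as an illustrative Python sketch can show (the score distribution, the 5-point bias, the noise level, and the approval_accuracy helper are all assumptions for illustration, not the authors’ model): a known constant bias can be undone by shifting the cutoff, but zero-mean noise scrambles which applicants sit above it, so no single cutoff recovers the right decisions.

```python
# An illustrative sketch (assumed numbers, not the authors' model) contrasting
# a biased credit score with a noisy one at a 620 approval cutoff.
import random

random.seed(1)

N = 50_000
true_score = [random.gauss(650, 50) for _ in range(N)]   # hypothetical "true" creditworthiness

biased = [s - 5 for s in true_score]                     # systematic 5-point understatement
noisy  = [s + random.gauss(0, 30) for s in true_score]   # zero-mean noise, sd 30

def approval_accuracy(observed: list, threshold: float) -> float:
    """Share of applicants whose approval decision matches what their
    true score would produce at a 620 cutoff."""
    return sum((o >= threshold) == (t >= 620)
               for o, t in zip(observed, true_score)) / N

# Lowering the cutoff by the known bias restores every correct decision...
print(f"biased score, cutoff shifted to 615: {approval_accuracy(biased, 615):.3f}")
# ...but no single cutoff does the same for the noisy score.
best = max(approval_accuracy(noisy, th) for th in range(600, 641))
print(f"noisy score, best cutoff in 600-640: {best:.3f}")
```

The first line prints a perfect 1.000; the best the noisy score manages stays well below that. That is the sense in which no threshold adjustment, however well-intentioned, can compensate for imprecise data.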

