
Can Algorithms And Big Data Appraise Character?


“Algorithms aren’t subjective,” said Jure Leskovec, a computer science professor at Stanford quoted by Quentin Hardy of The New York Times in “Using Algorithms to Determine Character.” “Bias comes from people.”

Bias comes from people, yes. As for algorithms (the mathematically based rules used to make predictions and decisions in business applications), is it true that they aren’t subjective? Hardy himself was quick to point out that “Algorithms do not fall from the sky.”

It’s an intriguing suggestion that computing algorithms might objectively evaluate human character. The suggestion brings to mind a line from Reverend Martin Luther King’s famous “I Have a Dream” speech: “I have a dream that my four little children will one day live in a nation where they will not be judged by the color of their skin but by the content of their character.”

Can algorithms lift us above the biases that influence the way we judge one another?

Hardy mentions Upstart, a lender, and ZestFinance, which is both a lender and a provider of underwriting models for other lenders, as examples of the new, Big Data-driven approach to evaluating creditworthiness. Upstart uses its own algorithms in combination with Fair Isaac’s FICO credit scores, while ZestFinance offers alternatives to FICO.

FICO scores are based on five types of information: payment history, amount owed, length of credit history, types of credit and new credit (credit that you have taken on recently). Upstart, Hardy explains, uses factors such as SAT scores and what college you attended as part of its algorithm, and the Upstart website indicates that it also asks for information such as employment and salary. ZestFinance looks for signals such as dropping a prepaid cell phone number or inconsistencies in information provided by different data sources.
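
To see in concrete terms what “mathematical and consistent” means here, consider a deliberately simplified sketch. The category weights below match the breakdown FICO publishes for those five types of information; everything else (the 0-to-1 inputs, the arithmetic, the mapping onto the 300-850 range) is an assumption invented for illustration, not any lender’s actual model.

```python
# Toy, illustrative scoring model only. The category weights mirror FICO's
# published breakdown; the rest of the arithmetic is assumed for this sketch.

def toy_credit_score(payment_history, amounts_owed, history_length,
                     credit_mix, new_credit):
    """Combine five factor scores (each in 0.0-1.0) into a 300-850 number."""
    composite = (0.35 * payment_history   # on-time payment record
                 + 0.30 * amounts_owed    # use of available credit
                 + 0.15 * history_length  # age of the credit history
                 + 0.10 * credit_mix      # variety of credit types
                 + 0.10 * new_credit)     # recently opened credit
    # Map the 0-1 composite onto the familiar 300-850 range.
    return round(300 + composite * 550)

# A borrower with a strong payment record but a short history:
print(toy_credit_score(0.95, 0.80, 0.30, 0.50, 0.70))  # prints 706
```

The same inputs always produce the same score, which is exactly the sense in which such systems feel objective.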

It all seems quite mathematical and consistent. The question is: can algorithms like these eliminate bias? Hardy’s piece discusses more than just lending, with examples from criminal justice and human resources as well. In all these realms, it would be great to evaluate individuals objectively, as individuals. And in all of them, it would be a shame to latch onto processes that look unbiased on the surface but really aren’t.

A deeper look at the lending examples hints at cracks beneath the surface. SAT scores favor those who come from affluent families, and choice of college may have more to do with the demographics of your parents than your individual character or personality. Dropping a phone number might be a tactic to dodge creditors, but then again it might just be necessary to avoid an abusive ex-boyfriend.
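
The SAT effect is easy to demonstrate with synthetic data. In the sketch below, every number (the distributions, the size of the affluence boost, the cutoff) is an assumption chosen for illustration; the point is only the mechanism: when a screening feature tracks affluence as well as ability, equally able applicants get approved at very different rates.

```python
import numpy as np

# Synthetic illustration only: every distribution and correlation below is
# an assumption chosen to expose the mechanism, not real lending data.
rng = np.random.default_rng(0)
n = 100_000

# Two groups with identical underlying repayment ability...
ability = rng.normal(0.0, 1.0, n)
group = rng.integers(0, 2, n)  # 0 = less affluent, 1 = more affluent

# ...but the "SAT score" reflects ability plus affluence (test prep, tutors).
sat = ability + 1.0 * group + rng.normal(0.0, 0.5, n)

# Approve whoever clears a fixed cutoff on the proxy.
approved = sat > 0.5

for g in (0, 1):
    mask = group == g
    print(f"group {g}: approval rate {approved[mask].mean():.1%}, "
          f"mean ability {ability[mask].mean():+.2f}")

# Both groups have the same mean ability, yet approvals diverge (roughly
# 33% vs. 67% here) because the cutoff is applied to a proxy contaminated
# by affluence.
```

Nothing in that decision rule mentions group membership, and it is perfectly consistent, yet the outcome is anything but neutral.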

Earlier this month, another New York Times piece, “When Algorithms Discriminate” by Claire Cain Miller, cited studies by researchers from several universities, each indicating that computer algorithms can reflect the biases, conscious or not, of their creators.

Apart from the powerful computers and large quantities of data involved, this really isn’t news. Seemingly objective measures have been known to hide biases for decades, at least. The SAT exam was designed as a social engineering tool. The problem of race bias in IQ tests is so well-known that it was once the central theme of an episode of the TV series “Good Times.” (See a clip from “The IQ Test.”) That was over 40 years ago.

Can algorithms be better than the people who make them? Maybe. But not without a lot of conscious effort to uncover and eliminate hidden bias.
