The recent debut of self-driving cars could transform a stressful commute into an opportunity to tackle emails and reading lists while making long-distance suburban travel appealing again. Americans are poised to gain more than 100 hours per year in free time by relinquishing the wheel to smart cars. The downside, though, lies in inevitable vulnerabilities: security threats, job loss, and environmental impact. Is the reward of AI worth the risk?
Artificial Intelligence is broadly defined as a computer’s ability to perform tasks normally requiring human intelligence. Ever since Alan Turing developed a test to determine a machine’s ability to exhibit intelligent behavior in 1950, roboticists and scientists have sought to pass it. In 2014, a computer program called “Eugene Goostman” succeeded by convincing 33% of the human judges that it, too, was human. Since then, companies like SAP, General Electric, and MasterCard have utilized machine learning and artificial intelligence (MLAI) to identify trends and insights, make predictions, and influence business decisions.
Artificial intelligence offers new opportunities to revolutionize operations in the financial services industry. Machine learning can process terabytes of data in seconds – a volume that a horde of humans with older machines or methods could not process in a lifetime. The sheer power of modern computing in assessing an increasing number of variables with speed and accuracy gives a financial institution rapid, strategic insight into customer behavior, reporting errors, and risk patterns. In the area of loan decisioning, this should lead to faster loan origination, fewer compliance problems and regulatory fines, and more inclusive lending overall.
We wanted to know: What are the risks and rewards of AI in lending, and is it an inevitable next step for compliance management?
Bob Birmingham, CCO of ZestFinance, and Dr. Anurag Agarwal, President of RiskExec at Asurity Technologies, explore.
Which Discrimination Would You Prefer: Human Or Machine?
Anurag Agarwal: With machine learning, the decision-making is supposedly agnostic to overt biases. Humans, by definition, are free-thinking, but carry biases that interfere with decision-making, especially in lending scenarios.
Bob Birmingham: AI is not a product but a solution, another approach to problem solving using advanced mathematics, analytics, and data. It’s not a magic fix but can remove some of the human discretion that leads to discrimination.
AA: The problem is that computers could reinforce societal disadvantages by relying too heavily on data patterns. When a machine ingests data, it builds an algorithm but cannot explain its methodology. It may introduce variations in the mix without understanding the broader implications.
Say you are trying to determine the risk of repayment for a borrower. The machine identifies physical appearance as a decision point. Based on data patterns, it detects that blue-eyed individuals are more prone to timely repayment of loans. Noticing a correlation, it elevates that decision point to a higher influence in future lending. By taking an apparently neutral data element and acting on a correlation, it has in essence created discrimination: it does not understand that blue eyes are mostly representative of Caucasians, or the societal implications of preferring those borrowers as a result.
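The proxy effect Agarwal describes can be shown with a minimal simulation. All names and numbers below are hypothetical: repayment here depends only on group membership (a stand-in for historical advantage), never on eye color, yet a model scoring on eye color alone would still "discover" a signal, because eye color tracks group membership.

```python
import random

random.seed(0)

# Hypothetical population: repayment depends only on group membership,
# never on eye color directly -- but the two traits are correlated.
population = []
for _ in range(10_000):
    group_a = random.random() < 0.5                           # advantaged group
    blue_eyes = random.random() < (0.8 if group_a else 0.1)   # correlated trait
    repaid = random.random() < (0.9 if group_a else 0.6)      # outcome gap
    population.append((blue_eyes, repaid))

def repay_rate(rows):
    """Fraction of borrowers in `rows` who repaid on time."""
    return sum(repaid for _, repaid in rows) / len(rows)

blue = [p for p in population if p[0]]
other = [p for p in population if not p[0]]
print(f"repay rate, blue-eyed: {repay_rate(blue):.2f}")
print(f"repay rate, others:    {repay_rate(other):.2f}")
# The gap exists only because eye color is a proxy for group membership,
# yet a naive learner would elevate eye color as a decision point anyway.
```

Removing the protected attribute itself is not enough; any feature correlated with it can quietly reintroduce the same preference.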
BB: We’ve been here before as an industry. When regression modeling first came out it was confusing, flawed, and its accuracy questioned. At the end of the day, it was actually a more accurate process than purely judgmental underwriting because it followed a clear set of instructions to find answers. MLAI creates an opportunity for continued improvement to mitigate and eliminate discriminatory lending practices.
Risk: Machines can learn the wrong lessons from data.
Reward: Machines don’t have overt biases.
AA: AI is very new. We don’t yet know how to regulate it or what its long-term impact is. As soon as we began generating data, we followed a “go forth and multiply” pattern. Now we have more data than we know what to do with, and much of it is unstructured.
BB: Alternative modeling and alternative data are often lumped together, but that shouldn’t always be the case. MLAI alternative modeling may be used on the same data that current regression models run on and provide lift. Alternative data may be used with your existing models too, but is more often used in conjunction with alternative modeling due to MLAI’s ability to handle large data sets. MLAI can also identify missing or erroneous data and solve the problems those inaccuracies create.
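A minimal sketch of the data-cleaning idea Birmingham mentions, using hypothetical applicant records: missing incomes are marked with `None`, an impossible negative value stands in for a data-entry error, and both are replaced with the median of the valid values.

```python
import statistics

# Hypothetical applicant incomes; None marks a missing value, and a
# negative value is an obvious data-entry error.
incomes = [52_000, 61_500, None, 48_250, -1, 75_000, None, 58_900]

valid = [x for x in incomes if x is not None and x > 0]
fill = statistics.median(valid)  # a simple, conservative imputation choice

cleaned = [x if (x is not None and x > 0) else fill for x in incomes]
print(cleaned)
```

Real MLAI pipelines use far richer imputation than a single median, but the principle is the same: detect the bad values first, then fill them in a way the downstream model can tolerate.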
AA: Alternative data, such as Facebook profiles, Twitter feeds, and LinkedIn pages, are already used in employment. HR departments use this publicly available information to determine if you are a qualified candidate. By knowing this in advance, you can change your online behavior to skew these data points in your favor. That’s the big question with alternative data in the lending space, along with privacy and transparency issues. We don’t know who gets to manipulate the data, where it comes from, or who has access to it. There’s no established system or standardized data points, which means most companies must follow their own proprietary lending rules, making it a regulatory free-for-all.
BB: While there are caveats, the rewards significantly outweigh the risks. There are many underserved individuals, businesses, and micro-borrowers with little or no credit that could benefit from alternative data. If an applicant doesn’t have good credit, or any credit at all, lenders can use MLAI techniques to paint a richer portrait of the borrower’s reliability using nontraditional factors like e-commerce histories, phone bills, and purchasing records. MLAI can open up the credit market, measure patterns, and fill in the data gaps, giving lenders a more holistic view of an applicant. Alternative data doesn’t have to be “creepy” data and doesn’t have to be social media data. Responsible and transparent use of alternative data to expand access should be encouraged.
Risk: The use of alternative data in credit underwriting raises privacy, transparency, and data integrity issues.
Reward: Alternative data can increase financial inclusion by granting access to capital to individuals and businesses with no traditional credit history.
Hackers vs. Trackers
AA: As we saw with Equifax, any time data is stored and processed in the cloud there is the risk of data hacking. Now we are asking: who is responsible for protecting all of this data? Using such a detailed lending profile increases the necessity for data privacy and security.
BB: These are risks but the industry has always been responsible for protecting sensitive data. The good news is, the more information we have about a borrower, the quicker we can identify errors and anomalous behavior. This is a great consumer benefit. Using the same Equifax example, imagine if we could say “this person was affected, and these actions are very different from their previous activity. This is a red flag that their information was stolen.”
Banks raise red flags when uncharacteristically large payments are made or a card is used in a different country, but imagine how much more effective these alert systems could be with additional insights into an individual’s unique behavior patterns.
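The alert systems described above can be illustrated with a toy behavioral baseline. The transaction history and the three-standard-deviation threshold below are hypothetical choices for the sketch; real systems model many more dimensions of behavior than amount alone.

```python
import statistics

# Hypothetical transaction history (USD amounts) for one cardholder.
history = [42.0, 18.5, 63.2, 27.9, 55.1, 31.4, 48.8, 22.3, 39.6, 51.0]

mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_anomalous(amount, threshold=3.0):
    """Flag amounts more than `threshold` standard deviations from the
    cardholder's own historical mean."""
    return abs(amount - mean) / stdev > threshold

print(is_anomalous(45.0))    # typical spend for this customer
print(is_anomalous(950.0))   # uncharacteristically large payment
```

The point of richer data is exactly this personalization: the same $950 charge might be routine for one customer and a red flag for another, because each is compared against their own baseline rather than a global rule.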
Risk: More data in the cloud means a higher risk to consumers in the event of a data hack.
Reward: Additional data can identify unusual behavior quicker, which is a benefit to consumers.
The Regulator’s Dilemma
AA: Regulators have a big job ahead of them figuring out how to regulate companies using this information. Many of these companies aren’t sure how to use it themselves.
BB: Initially, users of MLAI in high stakes applications have an obligation to educate the public, create a set of best practices for their industries, and be transparent. Financial inclusion is something regulators support and this technology can help lower the barriers to entry.
AA: In late 2017, the CFPB issued a no-action letter to Upstart Network, an online lending platform that uses both traditional and alternative data to evaluate consumer loan applications. The terms of the letter required Upstart to share data with the CFPB about its decision-making processes, consumer risk mitigation, and methods for expanding access to credit for traditionally underserved populations.
By studying these companies, regulators can better understand the impact on credit in general, on traditionally underserved populations, and on the application of compliance management systems.
Risk: Regulators are still learning about AI and how to properly monitor it.
Rewards: Regulators support consumers and want to make access to credit more inclusive.
The Elephant In The Room: Jobs
BB: Typically, Financial Investigative Units (FIUs) are looking at alerts 24/7, researching a person, tracking where money is going, and determining if they should file a suspicious activity report. It’s the banks’ biggest compliance cost and their highest area of employee attrition, with turnover ranging from 15–35% in the FIUs. With AI, FIUs can focus on stopping financial crimes rather than toggling back and forth between 15 screens and combing through tons of information. This should free compliance personnel to do higher-value, more rewarding work, in addition to driving more efficient outcomes for their organizations.
AA: I prefer a human loan officer over an automated machine-learning system. If something happened six months ago that caused your credit to go down, you can explain that to a loan officer. How can you convey that to an automated system? Those intangibles make human interaction necessary. I believe there is something valuable in interpersonal interactions that can never be captured in a truly automated fashion.
Risk: Headcounts in certain departments that are reliant on manual processes could decrease.
Reward: New and more rewarding job responsibilities will result in less turnover, but ultimately there is no replacement for interpersonal interaction for certain positions.
BB: Currently, the industry operates with a “look back” approach for Fair Lending, which doesn’t really work. The whole process of defending and explaining discrimination after a model is put into production feels outdated. Today, we can and should build AI models to proactively remove discrimination before ever putting a model into production.
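One concrete, pre-production check in the spirit of Birmingham's point is the "four-fifths rule" screen long used in disparate-impact analysis: compare a candidate model's approval rates across groups before it ever reaches production. The counts below are hypothetical, and real fair-lending review goes well beyond this single ratio.

```python
# Hypothetical approval counts from a candidate model's backtest.
approvals = {
    "group_a": {"approved": 720, "total": 1000},
    "group_b": {"approved": 540, "total": 1000},
}

rates = {g: d["approved"] / d["total"] for g, d in approvals.items()}
impact_ratio = min(rates.values()) / max(rates.values())

print(f"adverse impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:  # the common four-fifths screening threshold
    print("flag: review candidate model for disparate impact before production")
```

Running a screen like this on every candidate model inverts the "look back" posture: the disparity is caught, and the model revised, before any real applicant is scored.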
AA: This technology is still very new. Medium-to-large lenders should let the technology mature and wait to see what the regulatory landscape looks like. Regulators still have a ways to go before any of us will fully understand what the future looks like for AI in the lending space.