Here’s hoping you got exactly what you wanted for Christmas! You did want an artificially intelligent loan application, didn’t you? After all, based on what we saw at the convention in Denver and all the publicity since then, this is what most mortgage bankers had at the top of their lists to Santa.
Without a doubt, the full force of this change is sweeping the industry. Whether it is called AI, Machine Learning, Digital Mortgages or has a unique name, lenders are looking for ways to streamline the application process. However, not all of this is necessarily good news. In talking to numerous loan officers and technologists in the industry, it appears we are headed for a crisis of confusion among consumers and the secondary market. This potential crisis has many different causes.
Industry members from all segments are asking whether every automated option is really “artificial intelligence” and what it means to them. Some are concerned that the expense of developing such a system is far beyond their means. Overall, however, they are just not sure what it is or what to do about it.
The quick answer to that question is obviously dependent on what you want a machine to do. Today the term artificial intelligence is discussed by many in our industry as a single effort which will take in information and use it to make an underwriting decision. In other words, an “expert system”. What’s so hard about collecting data and sending it to an automated underwriting system? We already do that. But is that really AI?
In its most simplistic form, “AI” is a machine with human cognition, but there is much more to it than that. Just as individuals are not born with the knowledge they have today, a machine must learn. Humans can innately associate facts with problems; a machine must be taught both the facts and how to associate them appropriately with the problems it is trying to solve. How much we teach a machine depends on what we want it to do as well as how much information we give it. Machines with the ability to think like humans not only hold millions of facts but can use these facts in a “Neural Network”. In other words, they can reason based on what they know.
Lesser types of AI include programs such as Expert Systems, Machine Learning or RPAs. While these programs all share the same core expectation, their abilities are very different. For example, Expert Systems are a method of automated reasoning based on a very specific set of facts, rules and principles. Through a user interface that asks questions, they filter the data through the rules they have. If a rule is not there, they cannot “reason” an answer.
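To make this concrete, here is a minimal sketch of that rule-filtering behavior. The rules, field names and thresholds below are invented for illustration only and do not reflect any actual underwriting system.

```python
# Minimal expert-system sketch: answers come only from explicit rules.
# All rule conditions and field names are hypothetical examples.

def evaluate(facts, rules):
    """Return the conclusion of the first rule whose condition matches,
    or None when no rule covers the facts (the system cannot 'reason')."""
    for condition, conclusion in rules:
        if condition(facts):
            return conclusion
    return None  # no applicable rule: no answer possible

# Illustrative underwriting-style rules (not real criteria)
rules = [
    (lambda f: f["credit_score"] >= 700 and f["dti"] <= 0.43, "approve"),
    (lambda f: f["credit_score"] < 620, "decline"),
]

print(evaluate({"credit_score": 720, "dti": 0.35}, rules))  # approve
print(evaluate({"credit_score": 650, "dti": 0.50}, rules))  # None
```

Note how the second applicant falls between the rules: the system has no way to fill the gap, which is exactly the limitation described above.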
Machine Learning development involves the construction of algorithms that can learn and make predictions or decisions based on the data incorporated into the program. This type of AI is most often used in areas such as data mining, since it uses statistical analysis to identify patterns. Using the data available, it makes predictions about any new data it receives. Machine Learning algorithms include such statistical methods as regression analysis and decision trees. This type of program is trained by running large amounts of data through the model until it finds sufficient patterns to make accurate decisions.
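The difference from an expert system is that the rule is learned from data rather than written in advance. The sketch below fits the simplest possible decision tree, a single threshold, to a handful of invented examples; the training data and feature are hypothetical.

```python
# Machine-learning sketch: learn a decision threshold from labeled data.
# The samples and feature are invented for illustration; real models use
# far richer features and proper statistical validation.

def fit_stump(samples):
    """Pick the threshold on one numeric feature that best separates
    'good' (1) from 'bad' (0) outcomes in the training data."""
    best_thresh, best_correct = None, -1
    for t in sorted({x for x, _ in samples}):
        correct = sum((x >= t) == bool(y) for x, y in samples)
        if correct > best_correct:
            best_thresh, best_correct = t, correct
    return best_thresh

# Hypothetical training data: (credit_score, loan_repaid)
train = [(580, 0), (610, 0), (640, 1), (700, 1), (720, 1), (600, 0)]
threshold = fit_stump(train)        # learned, not hand-written

def predict(score):
    """Apply the learned pattern to new data."""
    return score >= threshold
```

No one told the program where to split; it found the pattern in the data, which is the essence of the approach described above.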
RPA, or Robotic Process Automation, mimics user activities and can process structured and semi-structured data in line with the rules embedded in the program. It is highly deterministic; in other words, based on the information received, the output will be what the rules have defined. These programs are frequently used alongside expert systems and typically include a “human-assist” factor that deals with exceptions that do not meet the rules in place. However, one of the values of using RPA technology is that the program can “learn,” or add rules, based on user actions. For example, if such a program is evaluating the completeness of a document, the rule may say that all fields must include data. In this instance, a document with an empty field would become an exception. If, however, each time this occurs in a specific field the human reviewer says it is “OK,” the machine will adapt its rules to accept a document with that field empty. This type of AI is used primarily with OCR technology and is employed in operations that currently rely on a “stare and compare” process.
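That document-completeness example can be sketched in a few lines. The field names and the adaptation rule here are simplified illustrations, not any vendor’s actual logic.

```python
# RPA sketch: a document-completeness check that adapts its rules
# when a human reviewer overrides an exception. Field names are
# hypothetical illustrations.

class DocumentChecker:
    def __init__(self, required_fields):
        self.required = set(required_fields)

    def check(self, doc):
        """Return the required fields that are empty (the exceptions)."""
        return [f for f in self.required if not doc.get(f)]

    def human_ok(self, field):
        """A reviewer marked an empty field acceptable: adapt the rules."""
        self.required.discard(field)

checker = DocumentChecker(["name", "address", "middle_initial"])
doc = {"name": "A. Borrower", "address": "1 Main St", "middle_initial": ""}
exceptions = checker.check(doc)      # ["middle_initial"] flagged
checker.human_ok("middle_initial")   # reviewer says this field is OK
passed = checker.check(doc) == []    # the same document now passes
```

After one human override, the empty middle-initial field no longer generates an exception, replacing a “stare and compare” step for that field.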
In developing any of these systems, it should be evident by now that the common factor in all of these programs is the data. The types of data received by any organization include structured, semi-structured and unstructured data. A true AI program can utilize all three types, whereas the lesser options are limited to structured data and, in some cases, semi-structured data. Unstructured and semi-structured data are sometimes referred to as “dark data”.
Here is where the issues and risks begin. Everyone is familiar with the term GIGO, or “garbage in, garbage out”. Without consistent and accurate data, any “thinking” done by these programs is suspect. For years, the industry has struggled to develop a single source of data with consistent definitions and formats, and it has still failed to ensure that the data used is accurate. This, however, is not the only issue when it comes to data utilized in an AI effort. There are a variety of other data problems, among them “noisy data”, “dirty data”, “sparse data” and “inadequate data”. These issues result in conflicting or misleading data, missing or erroneous values, and incomplete data. While MISMO has done a tremendous job of structuring data definitions, any program being developed must apply some type of pre-analytics to the raw data to ensure standardization, imputation and other basic techniques that protect the quality of the data.
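Two of the pre-analytics techniques named above, imputation and standardization, can be sketched simply. The income figures below are invented; in practice these steps run against an organization’s full raw data feed before any model sees it.

```python
# Pre-analytics sketch: basic imputation and standardization applied to
# raw data before it feeds an AI model. The values are invented examples.
import statistics

def impute(values):
    """Fill missing entries (None) with the mean of the observed values,
    one simple answer to 'sparse' or incomplete data."""
    observed = [v for v in values if v is not None]
    mean = statistics.mean(observed)
    return [mean if v is None else v for v in values]

def standardize(values):
    """Rescale to zero mean and unit variance (z-scores), so fields
    from different sources land on a comparable scale."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [(v - mean) / sd for v in values]

raw_incomes = [52000, None, 61000, 48000, None]   # sparse raw data
clean = standardize(impute(raw_incomes))          # model-ready values
```

Mean imputation and z-scores are only the most basic options; the point is that some such gate must sit between raw data and any “thinking” program.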
A second issue with data is its security and governance. Each entity must develop a method for ensuring that the data is managed properly: establish acceptable sources of data, determine how it will be analyzed for consistency and accuracy, and identify who controls the final data set so that it cannot be accessed without authorization. Data governance is the management of this resource in a way that ensures its quality and proper use while maintaining the organization’s ability to capitalize on the opportunities presented.
In addition to data, there are other critical issues that must be addressed by any company seeking to utilize AI technology. Chief among these are talent and culture. One of the primary challenges is assembling the talent pool necessary to manage, develop and execute analytic projects involving machine learning. Data scientists, individuals skilled in computer science, math and domain expertise, are necessary to develop and maintain these AI programs, yet there is a shortage of data analytic talent in this industry. On the downside, many employees will quickly lose relevance in the workplace as many of the tasks they perform are taken over by machine learning technology.
Another potential employee change-over will be found in the level of consumer interaction. With the advancement of expert systems and virtual assistants, consumers can ask questions, receive answers and proceed with their applications without a human interface. While feedback so far is mixed on who is most likely to use such an approach, Loan Depot has already engaged a company to develop an interactive consumer-facing system to assist potential customers in this process. Another company, Neutrino Financial Services, is developing a program that will allow consumers to obtain answers to questions and start the application process. When ready, they can then select the company to which they want to apply from a broad listing of lenders. All this without the use of a loan officer. The applications taken in this program will then be transferred to the selected lender, giving consumers more choice and lenders access to a wider variety of applicants.
If the use of AI, in all its various forms, is to be successfully implemented in this industry, a cultural change must also occur. The long-lasting impact of any AI implementation can only be achieved when a fundamental shift in an organization’s culture takes place. Although there has been much discussion about data-driven organizations, little of the effort needed to make this happen has occurred. Job types and job descriptions must change, along with business processes, and new technological solutions are necessary. In addition to investing in the technology, businesses must invest in the appropriate training of staff and in process redesigns if AI is to be successful.
However, the application process is certainly not the only area where AI will have an impact. We are already dealing with AUS systems and the accompanying reduction of underwriting staff. This, however, is just the beginning, as many of the clerical functions can use Machine Learning AI with a small contingent of personnel available to deal with the exceptions. Then, of course, there is the closing process. Today we have acceptance of digital signatures and electronic delivery of closing documents. While some states will not yet accept digital signatures, this too is rapidly changing. Of course, the biggest issue at closing is yet to be tackled: explaining what all these documents mean and what they require of the consumer. However, with a program such as the one being developed by Neutrino, consumers will have access to the resources necessary to address these questions by phone, tablet or laptop.
One of the most obvious uses of AI is in the servicing environment, where most of the administrative work is primarily clerical. The potential for AI here, including expert systems and machine learning, is tremendous. The processes that take place in servicing are prime targets for this type of change, and automating them would reduce the overall cost of servicing significantly. Default management could also benefit from these programs as “bots” are developed to predict potential delinquencies much sooner through statistical analysis of payment history and changes in credit status. Assistance with borrower notifications would also allow calls to be scheduled more effectively and relieve some of the staff burden. Another use would be identifying changes in property values.
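A delinquency-prediction “bot” of the kind described could be sketched as a simple score over payment history and credit drift. The weights, threshold and loan records below are purely illustrative assumptions, not a validated servicing model.

```python
# Servicing sketch: flag loans at risk of delinquency from payment
# history and credit-score decline. Weights and thresholds are
# illustrative assumptions only.

def risk_score(payment_history, credit_drop):
    """Score 0-1 from late payments and recent credit-score decline.
    payment_history: days late per recent payment (0 = on time).
    credit_drop: points the credit score has fallen recently."""
    late_share = sum(1 for d in payment_history if d > 0) / len(payment_history)
    drift = sum(payment_history[-3:]) / 90       # trend over last 3 payments
    credit_factor = min(credit_drop / 100, 1.0)
    return min(0.5 * late_share + 0.3 * drift + 0.2 * credit_factor, 1.0)

def flag_for_outreach(loans, threshold=0.4):
    """Return loan ids whose risk score warrants an early borrower call."""
    return [lid for lid, history, drop in loans
            if risk_score(history, drop) > threshold]

# Hypothetical portfolio: (loan_id, days_late_history, credit_drop)
loans = [("A1", [0, 0, 0, 0, 0, 0], 5),
         ("B2", [0, 15, 30, 10, 45, 60], 80)]
```

Here the scoring happens continuously across the whole portfolio, so loan B2 would surface for outreach well before a formal default event, which is the early-warning benefit described above.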
The data collected and standardized in this way will also assist in aggregating loans to form MBS pools. Because of the quality of data a data-driven organization can provide, the secondary market will have confidence in the information it receives. In addition, by utilizing existing risk models to establish operational variance risk, these institutions will be able to price loans effectively. Loans accurately priced for performance risk will be a more valuable investment tool, enhancing the overall buying and selling of these assets.
There is no doubt that AI is coming to the mortgage industry, and there is also no doubt that in order to reap its benefits, the industry must change. Only when we recognize that artificial intelligence brings with it costs and risks that will redesign the familiar, and accept what is to come, will this intellectual achievement be realized.
About The Author
rjbWalzak Consulting, Inc. was founded and is led by Rebecca Walzak, a leader in operational risk management programs in all areas of the consumer lending industry. In addition to consulting experience in mortgage banking, student lending and other types of consumer lending, she has hands-on practical experience in these organizations, having held numerous positions from top to bottom of the consumer lending industry over the past 25 years.