Beware Certain Technology


Among the stories told to individuals and companies newly involved in Manufacturing Quality is the tale of the executive of a clothing manufacturer. He had been losing money because his “cutters,” the individuals who cut the cloth used to make the clothing, were very inconsistent. A rather large percentage of the clothes he produced failed to conform to the manufacturing specifications. Despite his efforts at training these individuals, the problem persisted.

He then hired a Quality Management Consultant, who reviewed the problem and told the CEO that what he needed to do was install technology to cut the material. The cuts would then be consistent and the problem would be eliminated. The CEO followed this advice and at first was very pleased: sales were up, rejects were down and demand tripled. He subsequently increased the amount of material fed into the new “technological” tools. Soon, however, the problems returned. Very upset, he called the consultant back into his office and angrily disparaged the value of his advice. The consultant asked the CEO whether he had seen the problems when reviewing his quality measurement results. The CEO admitted that he really hadn’t looked at them, because he had the technology in place and technology doesn’t make mistakes.

The consultant then went to the manufacturing floor and examined the cutting tools. What he found was that, because of the demand level, the individuals positioning the material for the cutting tool had been adding more and more cloth to each cut. As a result, the blade actually spread out as it reached the bottom layers of the material; hence the source of the problem. When told of this, the CEO was distraught. “I thought this would solve my problem, but since the technology was deployed without the necessary calibration, I have only made it worse,” he bemoaned.

This story is told not to discourage the use of technology, but to warn management that technology is not the ultimate solution to all process problems. These tools need to be monitored to ensure they are working as designed, and sometimes human intervention is required to make sure this happens.

The same advice applies to mortgage lenders. Underwriting was designed to evaluate loans in order to determine the probability that the borrower would repay the associated debt. Over time, income levels, credit history and liquid assets, along with the amount of cash the borrower invested in the property, became associated with a high probability of performance. These findings became the linchpin of the rules that make up credit policy. However, it also became apparent that not every borrower fell neatly into the “approve” or “deny” category. The reality was that many borrowers whose credit profiles did not fall into the “acceptable” category were in fact as good as or better than those being approved. The reverse also occurred: borrowers with apparently acceptable credit criteria were not necessarily those that underwriters would approve. Thus began the recognition that underwriting was not as transparent as had been thought, and that the “art” of underwriting was just as important as the science.

This approach was predominant until the idea emerged that the “science” aspect of the underwriting process could be programmed to analyze a borrower in a systematic fashion. The primary technology utilized in this effort was known as artificial intelligence (AI). While there are several different methods of developing AI, the one used in automated underwriting systems was rules-based, relying on a series of underwriting rules and data to determine the acceptability of a loan application.
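
As a rough illustration of what “rules-based” means here, consider the minimal sketch below. The field names, thresholds and the approve-versus-refer split are hypothetical, invented purely for illustration; they are not the rules of any actual automated underwriting system.

    from dataclasses import dataclass

    @dataclass
    class LoanApplication:
        credit_score: int       # borrower credit score
        dti_ratio: float        # debt-to-income ratio, e.g. 0.38
        ltv_ratio: float        # loan-to-value ratio, e.g. 0.80
        reserve_months: float   # months of payments held in liquid assets

    def evaluate(app: LoanApplication) -> str:
        """Apply each rule in turn; any failure routes the file to a
        human underwriter ("refer") rather than an outright denial."""
        rules = [
            (app.credit_score >= 680, "credit score below 680"),
            (app.dti_ratio <= 0.43, "DTI above 43%"),
            (app.ltv_ratio <= 0.95, "LTV above 95%"),
            (app.reserve_months >= 2, "fewer than 2 months of reserves"),
        ]
        failures = [reason for passed, reason in rules if not passed]
        return "APPROVE" if not failures else "REFER: " + "; ".join(failures)

    print(evaluate(LoanApplication(720, 0.38, 0.80, 6)))  # APPROVE
    print(evaluate(LoanApplication(640, 0.48, 0.90, 1)))  # REFER: ...

Everything such a system cannot approve falls into the “refer” bucket, and that is exactly where the human “art” described above re-enters the process.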

The advancement of artificial intelligence has not abated since it was originally introduced in the 1990s by the then-dominant Prudential Home Mortgage. This program was quickly followed by Countrywide and then by Fannie Mae and Freddie Mac. These programs, developed to allow “plain vanilla” loans to be approved without having underwriters actually review the files, quickly advanced into more complex underwriting programs. Any loans that could not be approved by the system were sent to underwriters for review and, if acceptable, approved.

These “refer” loans were, in essence, the loans where the “art” of underwriting was needed: loans with issues or variances from guidelines that were best evaluated by an individual experienced in the risk relationships associated with loan approval.

Unfortunately, the industry recognized too late that the technology, while clearly a valuable underwriting tool, had flaws. The realization that the data entered needed to be consistent with the application, and that the data itself needed to be validated as accurate, came too late to prevent the abuse of these programs in the run-up to the Great Recession.

Numerous other aspects of the loan origination and servicing processes followed the path of automated underwriting. New regulatory compliance tools, credit scoring technology, automated valuation models and servicing modification models were only some of the tools that emerged as part of the new technological approach to mortgage lending. These tools were quickly adopted into the overall process. In many cases, this scientific approach overrode the “art” that was a critical part of what we did, and of how mortgage lenders were able to expand homeownership while keeping delinquencies low. While these tools are without question extremely valuable to the industry, where in the process did the evaluation that is the “art” of mortgage lending come into play? The answer is clearly in the Quality Control function.

Quality Control reviews conducted by knowledgeable mortgage underwriters, closers and servicing personnel were designed to be the measurement tool that evaluates whether the “science” that underwriting has become, the evaluation of applications, credit, appraisals and calculations, was working correctly. In addition, this analytic process determines whether the “art” of underwriting was used when necessary to ensure that loans met the organization’s risk profile. Without this measurement method, lenders could produce loans far outside their risk parameters due to such issues as errors in programming the rules, the use of inaccurate data or a general failure to follow guidelines. This loan and process analysis was the only measurement that could identify and report on fluctuations in the origination and servicing processes. Lenders’ failure to pay attention, or to act on the QC findings concerning the misuse of the AI technology, became obvious in 2007.

Since that time, new QC requirements have been imposed by Fannie Mae, Freddie Mac, FHA and VA. The new rules have added an analytic layer on top of error identification by requiring that lenders risk-rank the loans with errors based on the taxonomy they have developed. As a result, vendors who specialize in quality control software have seen this as an opportunity to enhance their programs and add another layer of technology, using AI, to the process. Some vendors have taken it upon themselves to dictate the specific process issues to be reviewed by quality control and to establish risk ratings for each variation from those rules. All of this is done automatically, and in order to change the risk rankings the analyst must take action by manually going in and changing the rating.
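
In rough outline, the vendor pattern looks something like the sketch below. The defect codes and severities are hypothetical; the point is simply that the pre-programmed rating stands unless an analyst takes explicit action to change it.

    # Vendor-supplied taxonomy: defect code -> pre-programmed severity.
    # Codes and severities here are hypothetical illustrations.
    DEFAULT_SEVERITY = {
        "INCOME_CALC_ERROR": "HIGH",
        "MISSING_VOE": "MODERATE",
        "APPRAISAL_COMMENT_MISSING": "LOW",
    }

    def rate_finding(defect_code, override=None):
        """Return the severity the QC report will carry. The automated
        rating stands unless the analyst explicitly overrides it."""
        if override is not None:
            return override
        return DEFAULT_SEVERITY.get(defect_code, "MODERATE")

    print(rate_finding("INCOME_CALC_ERROR"))          # HIGH (automatic)
    print(rate_finding("INCOME_CALC_ERROR", "LOW"))   # LOW (manual change)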

This is an approach in which the level of technological support actually deters the efficiency of the process and hampers Quality Control’s ability to provide accurate risk evaluations to management. Here’s why.

As discussed earlier, sometimes underwriting is based on the “art” of evaluation and not the “science.” When this is the case, the artificial intelligence is unable to adapt, and supposed errors found in the loan may actually be acceptable, or less risky than what has been programmed into the AI. Conversely, specific risk attributes that a lender believes are riskier may be missed because “the system does it for me.” Furthermore, as a result of these mistaken risk evaluations, the overall rating on the loan, which must be used to calculate the total error rate, could be inaccurate and convey a false impression to investors and/or regulators.
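
To see how a single mis-rated finding distorts the reported numbers, consider the sketch below. The sample and the convention that only “HIGH” severity loans count as defects are hypothetical, but the arithmetic is the point: one over-rated variance moves the rate that management and investors see.

    def defect_rate(loan_ratings):
        """Share of sampled loans whose overall rating is HIGH severity."""
        return sum(1 for r in loan_ratings if r == "HIGH") / len(loan_ratings)

    # Hypothetical 8-loan QC sample with the overall rating on each loan.
    sample = ["HIGH", "LOW", "MODERATE", "LOW", "HIGH", "LOW", "LOW", "LOW"]
    print(f"{defect_rate(sample):.1%}")   # 25.0%

    # If the AI over-rates one acceptable "art" variance as HIGH,
    # the same sample reports a materially different rate.
    sample[1] = "HIGH"
    print(f"{defect_rate(sample):.1%}")   # 37.5%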

Another issue with these AI-based QC programs is their inability to allow for unique products, such as those for private investors or loans where the lender has secured overlays to the standard agency products. In addition, there are product types, such as jumbo loans, that may have their own specific risk-related processes that have not been included in the AI rules. To further confuse things, these vendors do not allow individual clients to add their own specific questions or risk ratings to the program, citing a desire to maintain the “purity” of the resulting product. These issues make it impossible for lenders to get a clear picture of their individual process issues and product failures.

While I believe very strongly that the quality control process needs some level of standardization and consistency within its analytic function, I have grave concerns about any individual company dictating the risk related to individual errors when there is no evidence of how that rating will impact the overall evaluation of the loan. Furthermore, none of these tools has shown in any fashion how the ratings relate to performance and/or repurchase risk. I find it highly unlikely that any individual has the knowledge to decide whether the risk associated with an error is high or low without understanding this relationship.

These vendors are correct when they say this additional technology can make the process faster. That claim, however, holds only if the company uses the program “as is,” the QC staff does not evaluate any specific issue in light of the overall loan file, and the company has no unusual or unique programs or products. Ultimately, however, the additional technology utilized in these QC programs is layering on uncalibrated technology. Furthermore, it does not allow the individuals responsible for the output, and for the risk associated with it, to individualize the necessary calibration. As a result, the analysis and reporting provided to management is still flawed and will once again prove as irrelevant as the current loan-level findings, which management finds unsatisfactory.

While I applaud these companies for developing products that support QC, having companies spend money on technological tools that do not resolve the problem as intended does nothing more than delay the necessary realization that standardization and calibration are critical to making Quality Control function as the industry desperately needs.

About The Author

Rebecca Walzak

rjbWalzak Consulting, Inc. was founded and is led by Rebecca Walzak, a leader in operational risk management programs in all areas of the consumer lending industry. In addition to consulting experience in mortgage banking, student lending and other types of consumer lending, she has hands-on practical experience in these organizations, having held numerous positions from top to bottom of the consumer lending industry over the past 25 years.