Staying The Course

In talking with individuals who attended the recent technology conference, I was somewhat surprised that many brought up the belief that the industry was not only ready for, but actively looking for, opportunities to run their companies using robot, or "bot," technology.

While the idea that companies could be run completely by some type of "bot," whether intellectual or physical, is still far from reality, this conference seemed to signal that we are headed that way. In reviewing various summaries of the conference, it was readily apparent that those involved in the technological side of the mortgage industry are looking toward joining current technologies to make this happen. In other words, having progressed through data consistency, compliance checking, rule-based artificial intelligence and OCR, mortgage technologists now look to combine these tools so that the technologies alone can conduct functions currently completed through an interface with company personnel. While this "bot" approach is already in place in parts of the industry (the mortgage application process comes to mind), it has not yet reached the potential envisioned in the early 1990s when the first automated underwriting tools were developed.

While no one, even those who believe strongly in the potential of AI, thinks the industry is on the brink of replacing humans with machines, there are many areas where this approach can provide significant benefits. Many of these areas have been within the scope of industry visionaries for years. The one I am most familiar with is the concept I developed for automated quality control.

In 1995 I co-authored an article in Mortgage Banking magazine about the changing role of QC in the new era of automated underwriting systems. Prominently featured in that article was the idea of pre-funding QC, which could be built off the systems created for underwriting automation. Even though it took a catastrophic collapse of the mortgage industry for anyone to recognize the value, lenders are now required to conduct such a review. However, it has not played out the way I envisioned it in that article, where automation would provide the function.

Despite the rejection of the concept, I plowed ahead with visions of automated quality control and the potential value it had for the industry. In my mind, the existing quality control function would be replaced by a rule-based engine that would test each step of the process as it occurred. This program would then culminate in a score identifying performance risk due to the lender's mistakes or variances from guidelines. The score, and the underlying data, could be used to identify process weaknesses as well as give investors an accurate picture of the risk in the loans they were buying.
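
For illustration only, here is a minimal sketch of how such a rule-based engine might work. The rules, weights and loan fields below are hypothetical examples of mine, not the actual model discussed in this article:

```python
# Illustrative only: hypothetical rules, weights and loan fields.
# Each rule tests one step of the process; failures lower the score
# and are reported back as variances from guidelines.

RULES = [
    # (process step, weight, test that returns True when the step passes)
    ("Income documented",     30, lambda loan: loan.get("income_verified", False)),
    ("LTV within guideline",  25, lambda loan: loan.get("ltv", 1.0) <= 0.80),
    ("DTI within guideline",  25, lambda loan: loan.get("dti", 1.0) <= 0.43),
    ("Credit report on file", 20, lambda loan: loan.get("credit_report", False)),
]

def qc_score(loan):
    """Return a 0-100 quality score and the list of failed process steps."""
    total = sum(weight for _, weight, _ in RULES)
    earned, variances = 0, []
    for step, weight, test in RULES:
        if test(loan):
            earned += weight
        else:
            variances.append(step)
    return round(100 * earned / total), variances
```

A clean loan scores 100; every failed step both reduces the score and names the process weakness, so the same output can feed internal process improvement and investor risk reporting.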

Eventually I did create this program, based on a risk model that I developed and for which a patent is pending. Unfortunately, I tried to commercialize it in the early 2000s, when the industry's only concern was the opportunity for quick profits. The program has sat waiting for the right time and place. By staying the course, I may finally see my vision become a reality.

About The Author

Rebecca Walzak
rjbWalzak Consulting, Inc. was founded and is led by Rebecca Walzak, a leader in operational risk management programs in all areas of the consumer lending industry. In addition to consulting experience in mortgage banking, student lending and other types of consumer lending, she has hands-on practical experience in these organizations, having held numerous positions from top to bottom of the consumer lending industry over the past 25 years.

The Good, The Bad And The Reality Of AI


When I was getting my Masters in Business Administration, one of my professors lectured about the future of business. One example he used was from a futurist’s ideas on the factory of the future. According to this individual the factory of the future would have two employees: a man and a dog. The man would be there to feed the dog and the dog would be there to make sure the man did not touch anything.

While this tale may seem laughably far-fetched, conversations held at the recent technology conference seemed to indicate that the scenario is within the realm of possibility. Numerous discussions were held about the use of artificial intelligence ("AI") within the mortgage banking industry, ranging from rules-based programs to the use of tools such as optical character recognition ("OCR") and validated document collection to produce the first "bots" within the industry. But is this really AI, or is real artificial intelligence the use of computers that can emulate human thinking processes and contain human drives such as hunger, power and self-preservation?

Controversy abounds on the subject, and not just among potential users but among the most advanced thinkers in this area. Their beliefs range widely, from the conviction that AI will likely play a part in human extinction to the view that it will improve people's lives and give them more family and leisure time. An article in April's Vanity Fair magazine quotes Elon Musk, the developer of Tesla cars and of cost-efficient rockets intended to allow the settlement of Mars, as calling AI humanity's "biggest existential threat." On the opposite end of the scale, the futurist Ray Kurzweil has predicted that we are only 28 years away from the point where AI will far exceed human intelligence and humans will merge with this super-intelligent program to create the hybrid beings of the future.

While arguments continue at this theoretical level, most of these individuals agree that one of the greatest drawbacks to the full use of artificial intelligence is the interaction currently necessary between humans and these tools. In other words, for these programs to work, a human must verbally ask a question, make a statement or key in information. This problem goes away, however, with the merger of biological intelligence (human thought) and machine intelligence. To accomplish this, companies are currently working on an injectable mesh, called a neural lace, that is placed into the brain and can flash data from your brain wordlessly to your digital device or to the cloud, thereby creating what amounts to unlimited computing power.

With the on-going merger of AI into businesses there is also concern about how it will be managed and controlled. Current public policy on AI is largely undetermined and the software is largely unregulated. Some of the biggest technology companies have taken it upon themselves to develop a partnership on the subject in order to explore the full range of issues, including ethical concerns. The European Union is also deeply concerned and is considering such legal issues as whether robots have personhood or should be considered more like slaves as found in Roman law.

But the question overriding all of these issues appears to be: what exactly is artificial intelligence? Is it simply a bot-like program that runs rules to do simple, labor-intensive work, or is it the ability of machines to think as humans do and take over the entire workload of any business? And if so, what does this actually mean for employees in those businesses? Will their jobs be replaced by robots that not only collect information and compile data for a rules-based engine but actually make decisions based on the data fed into the machine's intelligence?

Most importantly to mortgage lenders, what does all this mean to our industry? While the fun and intellectual stimulation that comes from brainstorming these ideas generates lots of enthusiasm, if these concepts become reality, we need to be prepared to utilize them to our advantage and not be thwarted by extensive costs and back-room operations.

One way to envision how these AI programs will impact the industry is to look at what has been happening in similar operations. An article by Penny Crosman in the March 16, 2017 American Banker entitled "All the Ways AI Will Slash Wall Street Jobs" gives some insight into what we might expect. According to the author, Opimas, a capital markets consulting firm, projects that by 2025 artificial intelligence technologies will reduce employment in the capital markets profession by 230,000 people. Furthermore, spending on AI-related technologies is expected to be more than $1.5 billion and to reach $2.8 billion annually by 2021, just four years from now. This number does not even include the start-ups that capital markets firms will invest in during this period. All of this expense is expected to be offset by a 28% improvement in their cost-to-income ratios.

So where are these programs being placed? The first functions being replaced by AI technology are process-oriented jobs. These jobs are being replaced by lower-level AI functions that are programmed to do such things as look up documents, find data and compare multiple data sets. In addition to these process-oriented jobs, those whose function is to conduct analytics on the data are also being replaced, with such technology as machine learning. In this "deep learning" technology, AI programs digest large volumes of real-time data within a very short period and then "learn" to find patterns that provide insight and direction at a speed humans can't begin to match.

Another area of capital markets feeling the impact of AI is front-office sales. Since the introduction of AI technology in this area, there has been a 20% to 30% drop in headcount. Many jobs in the middle and back offices are also feeling the impact: since the majority of these jobs consist of processes connected by manual human intervention, AI that brings with it image recognition can replace this human activity.

Compliance functions that once required significant headcount increases are now being taken over by AI programs that validate specific documentation and provide a more holistic view of regulatory risk and organizational compliance trends. This is one area where IBM's Watson is proving extremely valuable.

The implementation of AI in capital markets gives an excellent overview of how this technology can be implemented in the mortgage industry. Currently, we use some lower level rules-based programs to conduct underwriting as well as OCR usage in some back-office functions. Applying the applications discussed previously, many, if not most of the job functions being conducted today by humans could in fact be replaced with future AI programs.

One good example is the use of AI to replace loan officers for taking applications and collecting data. The Rocket Mortgage program in use today by Quicken Loans is just one example. Furthermore, most of the data collection and organization process that is labeled “processing” is also easily replaced with existing sites that offer independent validation of the information utilized in making decisions.

An area that is also ripe for AI application is the title and closing function. Using OCR, data comparison and document production, these tasks can easily be completed, while the risk of mistakes or problems at the closing table could be handled in real time.

Back room operations can also be easily incorporated into AI functionality since it is very much a data recognition, document collection and validation effort. Post-closing functions which now take time and massive amounts of human labor can not only be streamlined, but the data collected can be used to revise and improve the processes themselves.

Underwriting, which may appear to be an AI function already, is actually where some of the best deep learning artificial intelligence is applicable. Since 1995 we have been using rule-based technology to conduct what we considered AI, but these are simply automated underwriting programs. Deep learning AI offers the industry the solution to a problem that has plagued it since its inception: identifying the true performance risk of loans.

Today's credit risk function continues to use static attributes to develop, expand and/or shrink credit policy without knowing the potential impact on any individual applicant. This credit risk stalemate has resulted in lending programs that reject applicants who may in fact prove to be credit-worthy borrowers, as can easily be seen in programs such as affordable housing and minority lending. Using deep learning, rather than simply applying standard credit policy to an application, AI can analyze the applicant against all probabilities of performance and decide to approve or reject. In other words, credit evaluations would be individualized for every applicant. In addition, performance probability would be the yardstick by which pricing is set.
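
As a toy sketch of that idea (the logistic coefficients and inputs below are invented for illustration and are not a real credit model), an estimated probability of performance can drive both the decision and an individualized price:

```python
import math

def performance_probability(dti, ltv, credit_score):
    """Toy logistic estimate of the probability a loan performs.
    Coefficients are illustrative, not calibrated to any real data."""
    z = 4.0 - 3.0 * dti - 2.0 * ltv + 2.5 * (credit_score - 650) / 100
    return 1.0 / (1.0 + math.exp(-z))

def price_margin(p_perform, base=0.25, ceiling=3.0):
    """Individualized pricing: the rate margin grows with estimated risk."""
    return round(base + (1.0 - p_perform) * (ceiling - base), 2)
```

A strong applicant (low DTI and LTV, high score) prices near the base margin; a weaker one is priced for, rather than simply rejected on, its measured risk.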

Servicing is, of course, primarily a set of manual back-office functions that AI can address. Once again, a deep learning application can maintain any information on taxes, insurance and related issues, transfer funds if and when needed, and provide escrow analyses, tax statements and individual billing statements.

Just this brief recap easily demonstrates the value of bringing AI into the mortgage lending environment. The question is: at what cost? There is no doubt that advancements in AI would have the same negative impact on mortgage lending employees as they are having in capital markets. In fact, the annual convention might just turn out to be a dog and a man. However, as shown above, mortgage lending is not the only industry that will feel this impact.

There are, of course, risks. One significant enough to delay implementation of AI by some firms is the risk that the technological intelligence could misinterpret input information and make decisions based on that information that would be disastrous to the company. One such example has already been experienced by Wall Street, to a small degree, when a mention of Anne Hathaway in the news resulted in a bump in Berkshire Hathaway's stock. Now known as the "Hathaway Effect," this has led companies to implement practices such as running validation scenarios and placing restrictions and stops on critical process points. This, of course, requires human oversight and runs headlong into the issue of AI and the reduction of human jobs in the industry.

The discussion of job elimination and creation, however, needs to be part of a much broader discussion of the impact of AI on the economy overall as well as on our industry. We have to think about what massive reductions in employment opportunities mean, and what types of jobs will be created, as current competencies become less relevant and those trained in AI technology become harder to find. This change from human intelligence to artificial intelligence is similar to the one undergone by the individuals who today are labeled the "white working class": as large-scale manufacturing replaced humans with more advanced technology, individuals who were educated to do manual and factory-related jobs became unemployable. Those who were able to take advanced training and education in the field have found new jobs, but those who haven't are left feeling angry and disenfranchised.

New skill sets that will be in demand for AI revolve around software engineering and data science. This is of course a given. But there will also be the need for a hybrid of business and digital skills which involves individuals who are knowledgeable about the business, who understand the digital environment and know how to benefit the business by continually improving its digital footprint.

While AI continues to advance and becomes more accepted in the industry, it would be wise of us to think about when, where and how it becomes most advantageous to the human side of the business equation. Using AI also means defining what our function as humans will be. Are we the masters of the technology, or will we evolve to the position described by Steve Wozniak when he said, "I now feed my dog steak because I see humans being the pets for robots and I want my robots to believe pets eat steak."

What Should Trump Do About The GSEs?


Great news! It was announced that the housing market has fully recovered from the debacle of the Great Recession. While good for the housing industry as a whole, the better news for mortgage lenders is the elimination of any new regulations. Having dealt with two of the most difficult consequences of the crisis, can we now wipe our hands of the problem and get back to running business the way we did prior to 2004? Well, not quite. There is still the issue of Fannie Mae and Freddie Mac to address.

Everyone, including consumers, is aware that these entities were up to their eyeballs in the lending programs between 2004 and 2008. Despite the fact that they each provided an AUS for use by lenders, the rules were written so that the systems would approve basically every loan submitted. Staying competitive seemed to be the only criterion. Their punishment, however, was slightly different from the rest of the industry's: they were taken into government conservatorship after being bailed out by the federal government. And there they have remained. But now it seems the time has come, or the industry has determined that it is time, to finally resolve this problem. So what's to be done, and when?

The interest in answering this question has generated a number of ideas about what to do and when to do it. For example, on a recent conference call one speaker, when asked about the GSEs' situation, stated that he did not see any political catalyst to do anything at the present time. One of the primary reasons is the belief that the FHFA and its director run a reasonably sound organization, which allows for a delay in any action. He went on to say that while some changes in the charter and mandate are likely to occur, the issue of repaying taxpayers the $187 billion owed to them has to be addressed.

Prior to these statements, articles in industry magazines were suggesting that there seems to be some disagreement on exactly what to do with the GSEs. While most agree that there is a continuing need for the government to be involved in the secondary market, whether this is the current GSEs or some other type of entity seems to be at the heart of the debate. One popular idea is to create a public utility with government control of all fees and charges through regulation. Paramount in this approach is the expectation that this entity will continue to offer access to all lenders and provide them with equal pricing.

With one of the GSE mandates being to provide affordable housing options to working-class families, those involved in organizations that monitor this also want to ensure that this focus continues. However, the latest HMDA data shows that the factor most highly correlated with denials for non-white and Hispanic applicants, male or female, was whether the loan was to be sold to one of the GSEs. This issue will continue to be burdensome to the agencies, and although automated systems will continue to evolve, the ongoing use of rules-based algorithmic models will do nothing to ensure the viability of any lending program, especially for these affordability issues. One thing seems consistent through all of these discussions: in whatever structure this government involvement takes, it cannot be allowed to pose such an enormous risk to taxpayers again, nor can it ever again place the broader financial system at the level of risk it did during the Great Recession.

Another voice in the ongoing discussion of what to do with Fannie Mae and Freddie Mac is the Mortgage Bankers Association (MBA), which recently published an introduction to its forthcoming proposal for addressing this issue. Based on the document, it is clear the MBA supports a new secondary market approach. This initial document places an emphasis on the role of the federal government and the necessity of protecting this new "market" from fluctuations due to political turmoil, favoritism and/or changing administrations.

The focus of any congressional action, according to the MBA position, should be to promote liquidity that stimulates investor purchases of mortgage-backed securities while preventing taxpayers from taking on the risk of those securities. The MBA has identified four critical elements that it believes must be part of any long-term solution: establishing the value of combining competition and regulation; providing equal access for all lenders regardless of size or structure; enhancing the current public mission of promoting affordable housing; and maintaining liquidity for both single-family and multi-family housing. Furthermore, the MBA has included in this statement support for allowing the creation of additional privately owned entrants to compete with the reformed Fannie Mae and Freddie Mac.

These entities, including the new Fannie Mae and Freddie Mac, would be organized as private utilities with a regulated rate of return and a public purpose of providing credit to the conventional mortgage market. In addition to this "end state," the MBA has identified a series of "guardrails" that must be implemented to reinforce this new mandate. Among these are the maintenance of a "bright line" between the primary and secondary markets; the requirement that these utilities be standalone companies to prevent any undue influence (such as that of big banks); and regulation of the resulting utilities as Systemically Important Financial Institutions (SIFIs).

So, What’s Missing?

Although these ideas and discussions are preliminary presentations of the more in-depth concepts, debates and legislative actions to come, the common thread in all the current offerings is the focus on Fannie Mae and Freddie Mac's role in a new secondary market. Emphasis has been placed on the idea that these new utilities will be aggregators of conventional single-family and multi-family loans. So, where does that leave the other activities that these agencies now control?

One of the most obvious is that of creating and administering credit policy. One of the "guardrails" listed in the MBA's GSE Reform Principles and Guardrails, released on January 30, 2017, is the operation and management of "…the government's QM-like single family eligibility parameters…". What is not clear is whether the credit policies to be put in place will be unique to each utility or whether there will be one set of guidelines for everyone. Another question left unanswered is whether the QM exception in place today will remain past its current stated end date.

As everyone is aware, Fannie Mae and Freddie Mac today compete through variations in these guidelines, even to the point of differing calculations for determining income from rental properties. If this level of diversity in guidelines were to exist across several different utilities, it seems likely that it would cause confusion as well as set up any number of these entities to take risks that would not be acceptable. Furthermore, the ATR/QM standards are the result of CFPB regulations that the current administration as well as congressional opponents have vowed to eliminate.

In addition to these problems, while Fannie Mae and Freddie Mac have as a foundation of their charters the requirement that they expand homeownership opportunities for potential borrowers requiring more affordable housing, the reality is that this has not occurred. All one must do is review the past years' HMDA data, including 2015, to see that denials for non-white and Hispanic men and women are most highly correlated with the intent to sell the loan to Fannie Mae or Freddie Mac. So the question remains: if the current guidelines are failing to produce the results required of these agencies, how does adding more of the same expand opportunity? On the other hand, will the emergence of very divergent guidelines, individualized for each utility, cause too much confusion and misdirection for the industry to handle?

Another expensive and antiquated program that is part of the agency requirements is quality control. Following the mortgage meltdown and the abundant evidence that quality failures were a direct cause of loan failures, both agencies introduced loan quality initiatives. Unfortunately, these programs did not address the primary issue of ensuring that the processes producing these loans were under control; instead they continued to rely on controlling each loan through an inspection process, both before the loan went to funding and afterward.

While the intent to "discover" problems prior to funding was a noble effort, the result has been companies implementing 100% reviews of approved loans without a clear understanding of what to do with the results, how to cure many of the problems, or how to obtain updated information. Furthermore, post-closing QC still occurs 90 days after the loan is closed, the acceptable sampling programs remain biased, and the results are far from comprehensive. To add insult to injury, these additional reviews double the cost while adding little, if any, value.

With the adoption of multiple utilities, will the existing QC requirements remain, will each aggregator have the option to determine how it expects this analysis to be completed, or will every lender be able to design its own? Regardless of how it plays out, the value of quality control can be captured by ensuring that pricing reflects the quality of the product sold.

Last, but not least, is the issue of compliance with regulatory requirements. Up to this point the GSEs have deliberately avoided evaluating individual compliance with regulations. While there are some valid reasons for this approach, with the new utilities, will this continue, or will the utilities decide to become involved with this process?

The issue of what to do with Fannie Mae and Freddie Mac will continue to play out for some time to come. However, forward-thinking originators and servicers will be scoping out how each of these options could impact their businesses so that they are prepared when change does happen.

Fair, Unfair And Deceptive

With the announcement earlier this year about the latest data additions to be included in all future HMDA reporting, the industry has been heavily focused on making sure that the necessary data is available within loan origination systems. Furthermore, the loan application form, commonly referred to as the 1003, has been updated to ensure that all this data can be collected from the borrower(s). This additional information, in conjunction with what is already collected, forms the basis of regulators' Fair Lending reviews.

Fair Lending is the federal regulatory framework that requires all lenders to treat every applicant equally. Depository institutions' lending patterns must demonstrate that they offer mortgage opportunities in the communities in which they accept deposits. Additional analysis is also conducted on the areas in which a lender typically lends. This has traditionally been known as the lender's footprint and is measured against racial population distributions within specific metropolitan statistical areas, or MSAs. In other words, if an MSA is 50% Hispanic, regulators would expect to see that 50% of your applicants are Hispanic; this, they believe, demonstrates the "fairness" of your lending practices. There are, however, some very "unfair" issues associated with this analysis, many of which will more than likely be exacerbated by the collection of additional data and the scrutiny of the CFPB.
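
The footprint comparison itself is simple arithmetic; a sketch with made-up numbers shows what the regulator computes:

```python
def footprint_gap(applicants_by_group, msa_population_share):
    """Lender's applicant share minus the MSA population share, per group.
    A large negative gap for a protected group is what draws scrutiny."""
    total = sum(applicants_by_group.values())
    return {group: round(applicants_by_group[group] / total - share, 3)
            for group, share in msa_population_share.items()}

# Hypothetical example: in a 50%-Hispanic MSA, a lender whose
# applicant pool is only 30% Hispanic shows a -0.20 gap.
gaps = footprint_gap({"hispanic": 3_000, "other": 7_000},
                     {"hispanic": 0.50, "other": 0.50})
```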

The most obvious of these is the poor quality of the data. Although the submission process includes quality and validity checks, inaccurate and/or inconsistent data is rampant. While most lenders work diligently to ensure good data, there have been instances where manufactured and calculated data have been used. Furthermore, until this past week’s announcement, there has never been a way to identify if all required lenders have even submitted their data. If data is submitted late or corrected and resubmitted, the changes never make it into the overall HMDA database for the year. Imagine one lender’s surprise upon finding out that the entire LAR they submitted one year was not included at all.

Unfortunately, even the bad and/or missing data included in the HMDA database is used to analyze lenders. For example, not all applications have the monitoring data completed, and since completing it is the borrower's prerogative, few, if any, lenders have race, gender and ethnicity data for every application. This can lead to some very unfair conclusions. Recent comparisons of loan application counts to the completeness of monitoring data found that the numbers just don't add up. For example, suppose a lender has 10,000 applications and the breakdown by race shows that 37% were minority applicants and 48% were white. If the population of the MSA is 52% minority, does this mean the lender is failing to meet regulatory standards? Without knowing the race of the remaining 15% of applicants, it is impossible to tell. Yet this is a major part of the regulatory review. Isn't this a bit deceptive on the part of the regulator?
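
The arithmetic behind that indeterminacy is a simple bound; with the numbers above, the true minority share could lie anywhere in a 15-point range:

```python
def minority_share_bounds(known_minority_pct, unreported_pct):
    """Bound the true minority share when monitoring data is incomplete:
    the unreported applicants could all be non-minority (lower bound)
    or all minority (upper bound)."""
    return known_minority_pct, known_minority_pct + unreported_pct

# 37% known minority applicants, 15% of applications unreported:
low, high = minority_share_bounds(37, 15)
# The true share is somewhere between 37% and 52%, so a 52%-minority
# MSA benchmark can be neither confirmed nor failed from this data.
```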

Finally, regulators and lenders alike must reconsider the use of comparative footprints in conducting this analysis. When lenders and banks were primarily regionalized this may have made sense but with the expansion to nationwide lending and the use of electronic applications, this model is unreliable and in fact deceptive when reaching any conclusion. This must be changed if we are truly to identify any discrimination practices.

The issues identified here are clear indicators that the regulators are not accurately measuring a lender's Fair Lending performance, but are instead conducting unfair and deceptive analytics themselves. To protect themselves, it is in every lender's best interest to know more about their HMDA data than any regulator does.

About The Author

Rebecca Walzak
rjbWalzak Consulting, Inc. was founded and is led by Rebecca Walzak, a leader in operational risk management programs in all areas of the consumer lending industry. In addition to consulting experience in mortgage banking, student lending and other types of consumer lending, she has hands-on practical experience in these organizations, having held numerous positions from top to bottom of the consumer lending industry over the past 25 years.

Consumer Evaluations Could Help Servicers

Recently the CFPB issued a proposed directive requiring servicers to develop a rating system that would indicate to consumers how efficiently and effectively the servicer addresses complaints. It suggested a five-star rating system that could be published and made available to anyone interested. The response from servicers was quick and extremely negative; to them, the idea was anathema. However, before the issue came to a head, the current administration declared that any new regulations were to be withdrawn.

However, one must wonder why mortgage servicers were so vehemently opposed to this idea. Was it because they have failed to address complaints appropriately in the past? Did they believe that this requirement would force them to expose negligent or unsatisfactory actions? Were they concerned that by allowing this information to be given out they would somehow diminish the value of their organizations? Or is it because they still don’t recognize borrowers as their customers, believing instead that their sole purpose is to serve investors rather than consumers?

The idea of a consumer rating system is not new. J.D. Power is well known for its rating programs and the awards given to companies that score well on them. The fact that a company has received such an award is frequently a central part of its marketing campaign. Quicken Loans continuously touts the number and frequency of its J.D. Power awards, as does Delta Airlines and winners from other industries. What is so abhorrent about such a system for servicers?

One reason has been the lack of a standardized approach to evaluating responses to consumer complaints and/or inquiries. Unfortunately, a taxonomy that would grade responses cannot be developed until there is some accepted standard for what should happen and in what time frame. After all, as the saying goes, “Any road will take you there if you don’t know where you are going.” Right now, there appears to be a consistent lack of understanding and/or agreement on what constitutes “doing it right.” Once that is determined, levels of performance can be developed. For example, if it is agreed that satisfactory performance is responding to the consumer within the required timeframe with an answer to the question posed or the information requested, then actions that are better and worse can be described and a positive or negative value assigned.
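One way to picture such a grading scheme, purely as an illustration (the categories, timeframes and scores here are invented, not a proposed industry standard):

```python
# A hypothetical rubric for grading complaint responses. "Satisfactory"
# means responding within the required timeframe with an actual answer;
# better and worse performance levels sit above and below that standard.
def grade_response(days_to_respond: int, deadline_days: int,
                   answered_question: bool) -> int:
    """Return a score: positive beats the standard, negative falls short."""
    if not answered_question:
        return -2                      # response gave no real answer
    if days_to_respond > deadline_days:
        return -1                      # answered, but late
    if days_to_respond <= deadline_days // 2:
        return 1                       # answered well ahead of deadline
    return 0                           # met the standard

# Example, assuming a 30-day response deadline:
print(grade_response(10, 30, True))   # early and complete
print(grade_response(25, 30, True))   # on time
print(grade_response(35, 30, True))   # late
```

Once every servicer grades against the same rubric, the scores can be aggregated into the kind of published rating the proposal envisioned.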

Another objection that keeps popping up is that consumers are not going to like what the servicer did or the answer to their question. Since this is bound to happen, the servicer will appear to provide unsatisfactory service when in fact it was simply complying with the required servicing standards. Of course, every company expects this to happen. Delta hasn’t flown every airplane on time and without incident, and there are ways to deal with one-off issues when analyzing the results.

Recognizing that the benefits far outweigh the negatives must happen before any of this can get started. Having data on which consumer interactions are beneficial and positive would allow servicers to determine how to duplicate that same approach on those that appear to be a problem. Living in a world where management is blind to the positives and negatives of its organization is ridiculous when an industry-devised and properly managed consumer evaluation program can win loyal customers and maybe even impress investors as well as regulators.

The Changes Ahead


With the recent announcement by the new administration that the Dodd-Frank Act would be reviewed, lenders began anticipating the reversal of the numerous regulations put in place after the mortgage crisis of 2008-2009. The creation of the Consumer Financial Protection Bureau (“CFPB”) brought with it a plethora of new requirements and restrictions on all types of consumer lending, not just mortgages. However, mortgage lenders, identified as the drivers of the financial collapse, were without a doubt the hardest hit, and not just with regulations. The CFPB exams, which many times resulted in penalties and fines for lenders, were particularly onerous as were the costly technological revisions that the new regulations required. But before we pop the bottles of champagne, we need to take a minute and consider what exactly is likely to happen and what these changes will actually mean to mortgage lenders.

Recognizing that any changes to the current regulatory environment must come through Congressional action means that, regardless of how urgently we feel these changes are needed, they will take time to accomplish. While the expected timing of the changes varies greatly, we can also expect a tremendous amount of push-back from proponents of the current regulatory environment. In addition, the proposed changes will require significant legal review, as the depth and breadth of the current regulations impact much more than mortgage lending. One other thought to keep in mind is the old saying “be careful what you wish for; you just may get it.” So, while there is intense anticipation of the elimination of Dodd-Frank, we need to be prudent, carefully evaluate what is included in each proposal, and thoroughly assess the potential impacts, both positive and negative. Having said that, let’s look at what is on the table already.

One of the first agenda items is the removal of the current director of the CFPB, Richard Cordray. The current law calls for the agency to be funded through the Federal Reserve and run by a director appointed by the president with no oversight by Congress. This has been a very sore spot for lenders, as they perceive the current director’s actions to be especially punitive, if not downright malicious, toward mortgage lenders. When the results of the litigation involving PHH were made public, the fact that the court felt the current structure was unconstitutional generated great anticipation that the new administration would immediately fire Cordray and replace him with someone of its choosing. This of course did not happen, as it was widely anticipated that Cordray would not simply walk away without a legal battle to retain the position and the existing structure of the bureau. However, the week of January 31st saw the introduction of a Senate bill that would replace the single director with a five-person bipartisan committee.

The Choice Act

In other action, Representative Jeb Hensarling, Chairman of the House Financial Services Committee, introduced his bill to dismantle the Dodd-Frank Act. In conjunction with statements made by members of the administration, the bill has been widely announced as the “newly improved” Dodd-Frank. This bill does not entirely dismantle what is currently in place but would make changes to some of the lesser-known elements of that regulation. For example, this bill, known as the Financial Choice Act, would end taxpayer-funded bailouts of large financial institutions; relieve banks that choose to be “strongly” capitalized from regulations that are viewed as preventing growth; impose tougher penalties on those who commit fraud; and hold federal regulators more accountable for the financial health of the country. This House bill would also replace the director position with a bipartisan committee and change the name of the organization from the CFPB to the Consumer Financial Opportunity Commission (CFOC). An evaluation of this bill by Fitch concluded that the new commission would retain many of the elements of the current CFPB but would put reasonable controls over its authority by mandating congressional oversight and appropriation requirements. It has also been noted that the bill would in fact widen the mandate of the original agency and provide more protections to consumers.

While all of this sounds favorable to “big banks” there does not seem to be much in it for the independent financial institutions and/or independent mortgage lenders. The expectation from this legislation for non-banks appears to be limited to the lowering of compliance costs and potentially fewer fines. It is possible that this legislation could drive an even bigger wedge between those that benefit and those that get very little relief from its passage.

One thing to keep in mind is that this bill must be vetted through both houses of Congress, where numerous changes and addendums are likely to occur. Furthermore, it is important to note that there are fewer than two years before all members of the House of Representatives and one-third of the Senate will be up for re-election. What we don’t know at this time is what resistance this bill will receive in both houses of Congress.

Overall, consumers and consumer advocacy groups appear to be very pleased with what the CFPB has accomplished. If they see this bill as a weakening of the protections provided under the CFPB, will the response be sufficiently negative to forestall or significantly change the bill? In addition to consumer groups, realtors are also keeping an eye on the bill. While it does not impact them directly, the resurgence of the housing market has been extremely beneficial to them, and they are anxious not to change anything that will ultimately dampen home-buyers’ enthusiasm.

Fannie Mae and Freddie Mac Reform

Another topic of discussion within the past week was the introduction by the MBA of its position on what should be done with Fannie Mae and Freddie Mac. While the MBA is only one of the entities expressing thoughts and recommendations these days, its position is clearly one of the most thoroughly vetted and is expected to carry a great deal of weight with Congressional leaders.

Based on the document, the MBA supports a new approach to the secondary market. One point of emphasis in this proposal is the role of the federal government and the necessity of insulating this new “market” from fluctuations due to political turmoil, favoritism and/or changing administrations.

The introduction document identified four critical elements that the MBA task force concluded must be part of any long-term solution. These include establishing the value of combining competition and regulation; providing equal access for all lenders regardless of size or structure; enhancing the current public mission of promoting affordable housing; and maintaining the level of liquidity for both single- and multi-family housing.

In a shift away from the exclusivity present in the current market, the MBA recommended that the new Fannie Mae and Freddie Mac be organized as private utilities with a regulated rate of return and a public purpose of providing credit to the conventional mortgage market. It also indicated that these entities should not be the only such organizations available for aggregating and securitizing loans. The document encouraged the development of private utilities that would compete with these agencies, thereby allowing for a more competitive secondary market.

MBA’s position also included a series of controls, which it labeled “guardrails,” that must be implemented to reinforce this new mandate. Among these are the maintenance of a “bright line” between the primary and secondary markets; a requirement that the utility companies be standalone, to prevent any undue influence (such as from big banks); and regulation of the resulting utilities as Systemically Important Financial Institutions (SIFIs).

Interestingly enough, there seems to be a consensus that the FHFA should remain as is, since it is reasonably well run under the direction of Mel Watt. However, the Financial Choice Act would extend expanded accountability to this agency as well. How these differences will be addressed is yet to be seen.

Overall, the last week or so seems to herald a resurgence of discussions designed to address many of the long-dormant issues revolving around the mortgage market. This is welcome news for the industry. Now it is incumbent on us to support those changes we see as critical to the overall health of the industry. It finally appears that better days are ahead.

The Fight Of The Century

In 1975 the “fight of the century” took place in Manila. Known as the “Thrilla in Manila,” it pitted Joe Frazier against Muhammad Ali in the boxing ring to determine once and for all who was the heavyweight champion of the world. As any sports fan knows, Ali won by TKO when Frazier’s corner stopped the fight before the start of the 15th round. The fight earned its reputation because of the two men involved. Both were well trained and totally focused on being champion. Everyone who has watched that fight, from then until now, is impressed with the grace and dignity these individuals displayed during each round. Both, many said, were champions regardless of the outcome.

The mortgage industry is now preparing for its own “fight of the century” as two more well-known individuals face off. No, it is not Deontay Wilder versus Tyson Fury. Instead, this fight is between President Trump and CFPB Director Richard Cordray. We all knew it was coming if there was a Republican victory at the polls last fall, but when it will begin is still open to discussion. One thing is certain, however: it will not be a graceful or dignified fight but a dirty, drag-down, litigious drama with no clear winner, and the only losers will be the mortgage industry and consumers.

The impetus for this fight began several years ago, when the CFPB was created to stop the abuses by lending companies that led to the Great Recession. The bureau was charged with creating new regulations to control financial organizations as well as monitoring consumer feedback on these companies’ actions. Because of its structure, the CFPB had no interest or incentive in listening to or acting on lenders’ requests to compromise on the details of these new requirements to make them more workable or less costly. The examinations conducted by the CFPB resulted in tremendous penalties and fines that appeared exorbitant for the violations identified. This abuse of power came to a head with the litigation brought by PHH against the CFPB. In that case the court ruled not only that the CFPB was incorrect in its determination of the problem, but also that the bureau’s structure, which required no accountability to either the legislative or executive branch of government, was basically unconstitutional. The call for significant changes in the organization and the Republican victory created an expectation that the Director would be fired immediately after the inauguration. Director Cordray, however, gave notice that he will not leave until his term is up next July and, if necessary, will use the courts to keep his position. And so, the fight began.

However, this fight is not between two equally strong opponents seeking to show through their training and skills that they deserve to be called champion. This fight is between two egotistical politicians, each focused only on destroying the other. Although Republicans and their supporters hope that their fighter, Trump, will win, Director Cordray promises not to let that happen, while the Trump administration has so far shown no eagerness to get into the ring. This fight of the century promises to be longer, nastier and much more contentious than the “Thrilla in Manila.” So, hold on to your seats. This is sure to be very entertaining, even if it ends up as a draw.

Producing Reliable Results


I started running again. Again, because I used to run every day and competed in numerous races, everything from 5Ks (a 3.1-mile run) to marathons (26 miles, 385 yards). While I certainly don’t run as fast as I used to, or as far, one thing hasn’t changed: the need to track my times in order to find out how to make myself better. If you are a runner or have ever known one, you know that this is one thing we have in common. We know every data point about our running process. We can tell you how many miles we run a day, a week, a month or a year. We can tell you our average minutes per mile, our best times, and even how many seconds we took off our time in every race we run. All of this data is important because it helps us get better as well as prepare for the next race. We use our last achievement as the benchmark for the next, and we are obsessed with having a reliable methodology for meeting those times. More than anything, we want to know what to expect when we sign up for the next race or focus on beating our own benchmark.

Those of you who are not runners may think we are overly focused on this need for highly reliable results. However, we are not alone. More and more companies, especially those that provide direct consumer services, have the same passion. Companies in the business of providing call center support are a perfect example. These companies, including leading global “customer experience management” providers, track numerous data points on an ongoing basis. Each of these data points has an acceptable level of performance which must be met on a quarterly basis. These results then become a critical part of their ability to attract new clients. The reliability with which a company continuously provides service at the expected level is what allows it to charge a higher price for its services. In other words, reliability generates profitability.

Direct customer contact services are not the only entities that profit from a “reliable process” approach. This concept is observable in manufacturing as well. In fact, most of the people reading this article have more than likely benefitted from it. Think about the products you buy: clothes, cars, food and just about every other purchase you make. Most people will buy multiple times from the same manufacturer if they are pleased with the product they originally bought. For example, my daughter bought a Volvo over 10 years ago. The car has over 150,000 miles on it and is still the one she takes on long-distance drives because she knows it will get her to wherever she is going and back. In fact, she is planning to give it to her son when he starts to drive so that she has an excuse to buy another one. This is where reliability really pays off. Why? Although a new Volvo has not yet proven it can perform the same, the confidence built up from the steady performance of her old one is the determining factor in her decision to get another, regardless of the cost.

So what brings about this level of reliability? Is it a special plan or a “secret” approach that some companies have developed? The answer, of course, is no. It is simply the ability of a company to make sure its operations, the processes which produce its product and/or service, are performed consistently. How do they do that, you may ask? They do it the same way a runner manages the consistency of his or her performance: by measuring, benchmarking and comparing.

Whether you are originating loans, purchasing them, warehousing them or servicing them, every function within every organization has a process for doing what it does. Much of this process may be technology driven or may depend entirely on human involvement. It doesn’t really matter. Where there is an operational process, there is an expectation of what that process will produce. Once that output is identified and the results are tracked, you can begin to develop operational reliability.
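As a minimal sketch of that idea (the process names and tracking data here are hypothetical), reliability is simply the share of process runs that produce the expected output:

```python
# Hypothetical tracking data: for each run of a process, record whether
# it produced the expected output. Reliability is the conformance rate.
from collections import defaultdict

runs = [
    ("doc_collection", True), ("doc_collection", True),
    ("doc_collection", False), ("underwriting", True),
    ("underwriting", True), ("underwriting", True),
]

totals = defaultdict(int)      # runs observed per process
conforming = defaultdict(int)  # runs that met the expectation

for process, met_expectation in runs:
    totals[process] += 1
    if met_expectation:
        conforming[process] += 1

for process in totals:
    rate = conforming[process] / totals[process]
    print(f"{process}: {rate:.0%} reliable over {totals[process]} runs")
```

Tracked over time, these per-process rates become the benchmarks against which improvement is measured, exactly as a runner benchmarks race times.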

The term “operational reliability” has become well known in most industries as the foundation of producing quality products. It means that a company whose products repeatedly perform as expected is much more likely to provide a product that meets your expectations the next time you buy it. In other words, if you contract with the call center company mentioned above, you can expect that your customers will receive the level of service that has been promised and is documented through its data.

All well and good, you say, but I have at least two, if not three, customers. There is the consumer, who expects strong and consistent support from my production staff as well as the information necessary to choose the best loan. There is also the investor, who expects that the loans they buy will repay based on the credit risk identified in the credit policies. Both the investor and the consumer expect that servicing operations will effectively handle payment processing and the associated activities and, if necessary, manage the foreclosure and REO process. Finally, there is the warehouse lender, who expects that the loans will be purchased in a timely manner and will not stagnate on the line or have to be repurchased.

Developing operational reliability

Most companies have more than one customer, each with different needs and expectations. However, when we drill down into what each of these differing customers wants, the answers are all consistent: every customer expects to receive what you have committed to provide. In order to deliver on that, you must ensure the operational reliability of your entire operation. So how does one go about developing it? The most logical place to begin is with the operations themselves. Start by identifying all the processes that go into “manufacturing” the mortgage loan. Of course, the first operation is the contact between the consumer and the loan officer. What is your process for making this happen? When the loan officer meets with a potential customer, what is supposed to be the result? What are the inputs the loan officer provides? What are the expectations from the consumer? What is the final output supposed to be? What actions and/or activities produce that result? Are these expectations documented and measured for at least a sample of loans originated by each loan officer? Once you have collected the data, you can use it to identify where the process is working and where it is not. Unfortunately, lenders many times fall prey to the belief that these activities cannot be measured and use this as an excuse for not attempting to monitor this piece of the process. However, it can be done. Other industries, such as call centers, have done it.

The next set of operational activities includes the decision-making and closing steps. Here, the actions taken are many times reviewed by others, such as QC staff or senior managers. Unfortunately, these reviews are not focused on whether the operations we perform actually support the customers’ expectations. For example, let’s look at credit underwriting guidelines. Nowhere in these guidelines is there any statement regarding how the loans produced using them will perform. We recognize that there are many different standards for underwriting, based on the risk appetite of the organization. The operational reliability of this process lies in making sure those guidelines are followed and that, if an exception to them is warranted, it is properly documented and tracked. Tracking exceptions to guidelines is much more important than just documenting them. Imagine if an exception to a guideline occurred 35% of the time, and this exception was tracked and found to have no impact on the performance of the loans. What could that mean to the purchaser of those loans? How could that impact the guidelines and streamline the operations of underwriting loans in the future?
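The exception-tracking idea above can be sketched as a simple comparison of performance between exception and non-exception loans (all figures below are hypothetical, chosen to match the 35% example):

```python
# Hypothetical loan records: (had_guideline_exception, defaulted)
loans = ([(True, False)] * 340 + [(True, True)] * 10 +
         [(False, False)] * 630 + [(False, True)] * 20)

def default_rate(records):
    """Share of records that defaulted."""
    return sum(1 for _, defaulted in records if defaulted) / len(records)

exceptions = [loan for loan in loans if loan[0]]
standard = [loan for loan in loans if not loan[0]]

print(f"Exception share: {len(exceptions) / len(loans):.0%}")
print(f"Default rate with exception:    {default_rate(exceptions):.1%}")
print(f"Default rate without exception: {default_rate(standard):.1%}")
# If the two rates are comparable, the exception has no observable impact
# on performance and the guideline may be a candidate for revision.
```

In this invented dataset the exception loans actually perform slightly better, which is exactly the kind of finding that could justify streamlining the guideline.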

Servicing, with its multi-faceted process, is ripe for reaping the rewards of operational reliability. Since the mortgage crisis, servicers have been inundated with new consumer requirements, especially those involving interaction with the borrower. These activities, at both the service provider level and the collection process level, could use the operational reliability measurements that have been developed by numerous call center operations. In addition, the CFPB has developed a set of standards that servicers are expected to meet. Yet how often does any specific servicer meet them? We don’t really know, because there is no benchmark that covers servicers. Maybe the operational reliability standards set by the CFPB are excessive? Maybe the level that can be reached is lower because of the associated operational processes? How can any servicer demonstrate that the operational reliability of its processes actually meets consumer demand? None of these questions can be answered because, unlike in other industries, this benchmark is absent.

A recent item in one of the industry trade journals suggested that servicers are at the breaking point; that if they are required to continue to meet all these requirements, their operations will implode. The writer of that article basically blamed outdated technology. However, if this is in fact the case, why haven’t servicing managers identified the operational practices that are failing, measured the failure rate and looked internally to change the operations? Instead, many have implemented manual reviews that are too little and far too late. These reviews tend to identify specific loan issues rather than the operational failures that produce unreliable results. The results should also provide more than just a dump of data; they should provide an in-depth understanding of what a process is supposed to produce and the rate at which it actually produces that result.

Most lenders pay close attention to their warehouse lines, but not to measure operational reliability. Instead, they review the number of loans and the days those loans have been on the line in order to avoid interest penalties. Imagine, however, if the origination operation was so reliable that concern over excessive interest payments was not an issue; if the reliability of the product was such that the loans were always purchased on time. Could that result in lower interest rates from the warehouse lender? As it stands, we will never know, because the operational focus on this process is on a lack of reliability rather than on how consistently the operation works.

Managing for reliability

The CFPB requires, in its statement of expected organizational management, that every company have a “CMS,” or compliance management system. Too frequently, organizations see this as a mandate merely to ensure compliance with regulatory requirements. What it really says is that lending companies must have a system in place to ensure that what they say they are providing is what is actually occurring. This involves having data that tracks all operational activities on a regular basis and in a meaningful format. If the data is not collected, or not collected accurately, the results will lack the reliability necessary to support management decisions.

Unfortunately, most senior managers do not have the information they need to make effective decisions. While they get reports on volume, profit and/or problem areas, they have nothing that allows them to reconcile these issues with the overall operation of the organization. Getting a QC report that says a total of 1% of the loans had a “significant defect” does nothing to help them understand the underlying operational process that caused the problem, the severity or impact of the issue, or the priority of management decisions to address it. Most of the time these reports, rather than providing assurance to managers, make “mountains out of molehills” and result in wasted time and energy trying to fix random problems. In a desperate attempt to really understand what is going on in their organizations, they demand more and more reviews and reports, which only succeed in confusing issues and increasing operational costs. Although attorneys at recent conferences have pushed the need for companies to know more about their organizations than the regulators do, this cannot happen if the data collection is focused on the wrong issues or is not statistically sound. In particular, managers must know not just production counts or the turn times for handling complaints, but how likely the issues seen as problems are to recur. This information comes from reliability measurements.
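Statistical soundness here is concrete: the uncertainty around a sampled defect rate depends on the sample size. A standard normal-approximation confidence interval (a sketch only; real QC sampling plans use more careful methods) shows why a “1% defect” finding from a small sample says very little:

```python
import math

def defect_rate_interval(defects: int, sample_size: int, z: float = 1.96):
    """Approximate 95% confidence interval for a sampled defect rate."""
    p = defects / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return max(0.0, p - margin), p + margin

# The same "1% defect rate" from 100 sampled loans vs. 2,000 sampled loans:
for n in (100, 2000):
    low, high = defect_rate_interval(round(0.01 * n), n)
    print(f"n={n:5d}: a 1% finding really means "
          f"somewhere between {low:.2%} and {high:.2%}")
```

With 100 loans sampled, the true defect rate could plausibly be anywhere from near zero to almost 3%; only with a much larger, well-designed sample does the 1% figure become something a manager can act on.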

At the end of the day, organizations that want to produce reliable products and services must identify, measure and report on all facets of their operations. This means not only covering every activity that contributes to the outcome, but having a means to measure its effectiveness and analyzing the results in a way that not only makes the operation efficient but also ensures that the products and services produced are worth the highest possible price. That is what operational reliability is all about.

Coming Soon: The Next Mortgage Crisis


Near the beginning of the American classic movie “It’s a Wonderful Life” Jimmy Stewart and Donna Reed are dancing on a gym floor that, unbeknownst to them, opens up to a swimming pool. Unaware that the periphery of the floor was slowly receding beneath them, they keep dancing, heading closer and closer to a steep drop into the water, all the while ignoring the shouts of others telling them to stop, that there was danger ahead. As would be expected, they fall off the edge and into the unknown waters beneath.

Ten years ago the mortgage industry, dancing with Wall Street, was in the same position as Jimmy and Donna. Just like those two, we kept dancing, despite the warnings being given, until we fell off the edge into the worst financial devastation seen in recent times. This Great Recession resulted in thousands of people losing their homes, hundreds of companies going bankrupt and the reputation of mortgage lenders being destroyed. The repercussions were swift and severe. The Dodd-Frank Act was passed and the Consumer Financial Protection Bureau (“CFPB”) was created. The regulations and examinations emanating from this body have placed previously unregulated independent lenders under its control, restricted credit, created new disclosure standards and documents that have cost millions for both production and servicing operations, and generated fear of examinations, along with penalties that have reached millions of dollars. Most surviving lenders, especially the larger banks, have been inundated with lawsuits from private investors who trusted Wall Street’s and lenders’ representations about the loans contained in numerous RMBS deals. These lawsuits have cost the industry billions.

Of course we have heard over and over again the underlying reasons for this debacle. It was the administration’s efforts to meet homeownership expectations that produced the bubble. Or was it, in fact, the creation of new “easy qualifying” products, lower interest rates, the run-up in housing prices, the expansion of subprime or the excessive appetite of investors, including Fannie Mae and Freddie Mac, to earn large returns on investments that were rated triple-A? We now recognize that it was all of these in some combination that fueled the fire. However, in conversations with numerous executives, underwriters and loan officers, it actually came down to one simple thing: greed. Once the profits started rolling in and practices like stated income and pay-option loans became part of lending, the only thing that mattered was making sure “nobody else did a deal I could have done.”

Now that we have “paid the price” for the excesses of the 2000s, have we learned our lesson? Right now the economy is once again growing, interest rates continue to be low, and housing prices are rising. So, are we setting ourselves up for another jump off the cliff? There are many on all sides of the issue who say the new regulations and the changes made by Fannie Mae, Freddie Mac and FHA are sufficient to prevent another crisis. Yet others believe that the CFPB must continue its diligent control over the industry to prevent another disaster.

To really understand the probability of a similar situation occurring, we have to look at what we as an industry have done to prevent such a recurrence. We cannot continue to justify similar actions because of the interest rates, or investors, or any of the myriad other drivers that came together to create the loan origination bubble that burst in 2007. We must recognize that our lack of control over the people, processes and technology involved in loan production and servicing was at the very heart of the collapse. So, where did we fail? A root cause analysis of the operational risk issues underpinning this failure can tell us what controls were lacking and help the industry understand what has to be done to prevent it from happening again.

Root Cause Analysis

A “root cause” analysis is part of a quality management program. Its focus is on identifying the issues that are embedded in any process whose results are unsatisfactory. The most common tool for this type of analysis is the fishbone diagram, which is used to delve deeper and deeper into a broken process. Beginning with the obvious problem, the sub-processes that contribute to the end result are identified and connected to the problem. Once completed, each sub-process is examined to see which one(s) are not working correctly. From there the analyst can make recommendations as to what needs to be “fixed” if the process is to produce the expected outcomes. Performing this analysis tells us a great deal about where our operations failed.
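The structure described above can be sketched in a few lines of code. Every category and cause below is invented for illustration, not drawn from any actual lender's analysis:

```python
# Hypothetical fishbone: the problem at the "head," contributing
# sub-processes grouped along the "bones."
fishbone = {
    "problem": "loans fail to perform as represented",
    "causes": {
        "People": ["inexperienced underwriters", "production pressure"],
        "Process": ["AUS reruns until approval", "conditions never cleared"],
        "Technology": ["bad data accepted as input"],
        "Third parties": ["broker quality not enforced"],
    },
}

def flag_broken(fishbone, findings):
    """Return the cause categories implicated by a set of observed findings."""
    return sorted(
        category
        for category, causes in fishbone["causes"].items()
        if any(cause in findings for cause in causes)
    )

flag_broken(fishbone, {"AUS reruns until approval", "bad data accepted as input"})
# → ["Process", "Technology"]
```

Mapping each review finding back to a named sub-process, rather than treating it as an isolated loan defect, is what lets the analyst recommend a process fix instead of a one-off correction.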

If we work from the premise that mortgage loans are produced with the expectation that they will perform, and that consumers will get the information they need to understand their repayment responsibilities, the following issues must be classified as critical failures that led to the collapse.

1.) Believing that technology was the critical control point in approving loans.

Although the industry has had technological support in completing its tasks since the early 80s, it was the automation of knowledge-based work (i.e., underwriting) that destroyed one of the most critical parts of the process. While automation can document the collection of information, retrieve data and generate documents, it is still human analysis and understanding that produce the most effective decisions. When first introduced, automated underwriting systems were heralded as the answer to easing the chokepoint of underwriting, but only for “plain vanilla” applications. This soon changed, however, as more and more programs were introduced and the use of, and payment for, these programs became more competitive. At the same time, the process failed to address the quality of the data being input into the system and the probability that if incorrect or inaccurate data was input, the results would be bad as well.

During the booming days of the mid-2000s, these programs were used and abused by every lender. It is important to remember that these programs were not neural networks that “learned” from each loan underwritten, but merely rules-based engines. In other words, garbage in resulted in garbage out.

In reviewing numerous files generated by investor lawsuits, it is all too common to find income that was miscalculated, copied incorrectly or just plain fraudulent. The same was true of assets and other areas of the underwriting analysis. Often there were numerous AUS reports in the files that had obviously been rerun with different information until an approval was generated. That result was then used to approve the loan.

Another issue with this technology was the failure to satisfy the conditions attached to these AUS approvals. All too frequently the output of the system would require additional information or provide direction on requirements to be met prior to closing. These were rarely found in the file. Furthermore, when these programs referred loans to underwriters, the issues were not analyzed; the loans were simply approved, most with no compensating factors identified. Whether you believe this was due to pressure from loan officers and production managers or just plain bad underwriting, the end result was that far too many loans were approved that should not have been.

An unintended consequence of this technology was the reduction in human underwriters who had been trained in the effective analysis of credit risk. Because of the volume and pressure, most of the individuals generating underwriting approvals were mere point-and-click junior processors. They didn’t understand the impact of bad data or the potential impact of inaccurate results. Unfortunately, the data from these poorly underwritten loans was then used in the secondary market to sell the loans. When the sales staff generated information showing DTIs averaging 40%, they believed that number was accurate. They didn’t know, and never asked, whether these numbers were correct when in fact many times, and in some cases as many as 65% or more, they were not.

2.) Failing to control counterparty and third party risk

There are no systems, or very few anyway, that are closed-end systems. In other words, every system needs inputs and produces outputs that can be used in another, or greater, system. This is true for mortgage origination and servicing. However, when we take the outputs of one system and introduce them into another, we open that system to risks that may not naturally be present. When we provide our outputs to others to help us address risks, we open ourselves up to their risks as well. The only way these risks can be managed is through a solid operational risk management program. The traditional approach to controlling these risks is a rigorous selection and monitoring system. Unfortunately, the processes necessary for controlling these risks were inadequate or not present at all, and these control points failed.

In conducting our root cause analysis of this process, it becomes obvious why. The original controls established made sense and were focused on doing business only with those companies that met the strictest criteria. In addition, the system was designed to evaluate the output those companies delivered to ensure that only products meeting each company’s standards were accepted. This should have resulted in effectively managing the risk.

Unfortunately, these controls, while existing on paper, were not followed or enforced by senior executives. Many times loans from brokers or other lenders were found to contain false information, yet these lenders and/or brokers were not eliminated from the program. All too often the refrain of “but they couldn’t be committing fraud, they are one of our best clients/producers” echoed through executive conference rooms. These cries were based not on the quality of the products themselves but on the amount of revenue they generated for the loan officers or account execs as well as for the company. If at any time an underwriter or quality analyst questioned the decision to maintain a relationship with a questionable supplier, they were told that they were not being a “team player” and that it was for the betterment of the company that the relationship continue.

The fact that these risks were not seen as real “risks” was evident in the testimony provided to Congress in the hearings that followed the collapse. The phrase “originate and sell” became synonymous with the failure of the loan origination process, which ignored third-party risks and the controls that were supposed to be in place.

3.) Total Failure of Quality Control

Without a doubt, this was the most significant and devastating failure of the entire crisis. This failure, more than any other, drove the acceptance of loans that were clearly fraudulent and/or inconsistent with the risks defined by credit policy. When as many as 90% of the loans in any RMBS fail to meet the guidelines they are represented to meet, the production process is clearly out of control. Yet none of the quality control processes that were supposed to be in place, such as the review of files prior to purchase by correspondent lenders, the standard post-closing program and/or the due diligence process were effective in preventing or controlling the massive amounts of exceptionally poor quality loans from being sold and included in RMBS deals.

The responsibility for this failure rests completely and solidly on the shoulders of Fannie Mae, Freddie Mac and FHA. These entities have been responsible for dictating to lenders how the post-close QC process is to be conducted since its inception by Fannie Mae in 1985. Even though the process was founded on antiquated “auditing” techniques that provided no operational assessment, any effort on the part of companies and/or individuals to change these requirements was met with a stone wall of resistance.

The potential failure of the program was clearly evident, beginning with the fact that the methodology was based on what these agencies wanted to review rather than allowing lenders to focus the program on their own risks. Furthermore, the sampling programs “approved” by these agencies were so flawed that it was impossible to conduct any type of reliable analysis. In addition, the process of “rebuttal” of the findings resulted not in better results, as expected, but in hiding critical issues. Adding insult to injury, the results were not even presented to management until 90 days after origination, and the findings themselves were nothing but a data dump that provided no direction to management whatsoever. In fact, management itself would redesign the results and reports so that any potential investor could easily be misled into thinking the origination process was working fine. In other words, these programs left lenders operating blind to the risks within the organization and the process.
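One reason a flawed sampling program cannot support reliable analysis is that sample sizes were never tied to any target precision. The textbook sample-size formula for estimating a proportion makes the connection explicit; the function name and the rates below are illustrative assumptions, not agency figures:

```python
import math

def sample_size(p_expected, margin, z=1.96):
    """Minimum sample size needed to estimate a defect rate to within
    +/- margin at roughly 95% confidence (z = 1.96)."""
    return math.ceil(z ** 2 * p_expected * (1 - p_expected) / margin ** 2)

# Estimating an expected 5% defect rate to within +/- 2 percentage points
# takes several hundred files:
n_tight = sample_size(0.05, 0.02)  # 457

# Loosening the margin to +/- 5 percentage points shrinks the review,
# but the resulting estimate says far less about the process:
n_loose = sample_size(0.05, 0.05)  # 73
```

Without a calculation like this, a sample is just a number of files pulled, and any “rate” computed from it has unknown precision.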

Additionally, the individuals actually conducting the reviews and running the programs had no educational background in how a proper QC program should be conducted. Companies, including Wall Street firms, blindly followed these agency requirements, knowing full well that the results were failing to produce any type of information that would identify negligence and improper origination processes. Since they could originate and sell, why worry? The issues were someone else’s problem, until they weren’t.

Yet there was one last place to stop these corrupt loans from being included in securities: the due diligence review. Unfortunately, this process was, if possible, worse than QC. Here the sellers of the loans basically dictated to the companies conducting the reviews what they should be looking for and what issues were not of concern. These standards were followed despite the fact that reviewers consistently found massive amounts of fraud and poor underwriting in the files. After all, if they reported these issues, the seller would never again request their services for a due diligence review. Interestingly enough, when I tried to demonstrate a much stronger analytic focus on loan quality to one Wall Street firm, they rejected the idea since they had (according to their analytic staff) a much better program. That company was Lehman Brothers. Guess it just wasn’t so!

What has been done?

Based on the results of this analysis, a review of the changes that have been made and/or imposed on mortgage lenders must be conducted to see if, in fact, the underlying causes of the crisis have been adequately addressed. The review of today’s practices and policies identified the following.

1.) The implementation of strong operational controls and executive management accountability for operational excellence. 

While the CFPB has incorporated the requirement for an effective management process as part of its expectations, most companies have spent little time or focus on what this actually means. This situation could be the result of the need to address the numerous other regulations from the CFPB, or it could be that executives actually believe they are already meeting this standard. Based on management statements and actions exhibited today, it is readily apparent that the latter is more likely the case. However, it is also evident that many of the older “originate and sell” leaders are now exiting the industry, and there is hope that the new leaders will be focused on operational excellence.

Probability the issue will create another crisis soon: high

2.) Use of technology

While the use of AUS systems will undoubtedly continue, it is apparent that the remaining systems are being reviewed and updated. Changes in the way credit is evaluated were recently announced and are expected to be implemented in September. This, however, does not eliminate the risk created by the lack of knowledgeable, experienced underwriters. There is still a critical need for people with credit analytic knowledge and experience.

In other operational areas, systems such as LoanLogics HD are being heralded as having the ability to filter loans from correspondent lenders so that emphasis can be placed on reviewing poor-quality execution. In addition, lenders are beginning to recognize that external vendor products may in fact alleviate some of the more specialized areas of risk, such as regulatory risk. Having these programs run standard comparative reviews and isolate the problem loans also allows for specialization in the review process.

Probability the issue will create another crisis soon: low

3.) Control of third party risk

Although loans continue to be sourced through third parties, the processes for approving, monitoring and terminating unacceptable partners have improved significantly. Lenders are using technology to support these efforts and are less tolerant of partners whose practices resemble those previously found behind unacceptable loans.

This of course does not mean that the risk is gone, only that it has been minimized by stronger operational controls and technology. The industry must stay vigilant in monitoring its third-party relationships and stay alert for any signs of deterioration in loan quality. Consistent with that is the increasing focus of these management groups on developing stronger reporting tools and standards by which they can measure and compare their lender/broker base.

Probability the issue will create another crisis soon: medium

4.) Effective Quality Control and Due Diligence programs

Despite the fact that the agencies have made several changes and added words reflecting modern QC techniques, today’s dictated quality control programs continue to be an abysmal failure. In reality, they have done nothing to fix the problems with the previous program. What they have done instead is create a stronger basis on which to reject loans they don’t like, without providing direction to lenders on how to utilize these new requirements. Basically, what they have done is similar to a piano teacher showing a student some basic keystrokes and scales and then placing Beethoven’s 9th Symphony in front of them and saying, “now play, and if you get it wrong you will be punished.”

While the basic flaws in the program are many, some stand out like a sore thumb. For example, the agencies have embedded the term “quality manufacturing” into the program, along with the use of the term “defects.” From there they identified what they believe to be the defects that lenders must look for in their reviews. Lenders must then tally these “defects” to arrive at an “error” rate. Anyone with any knowledge of actual quality management principles knows that this is wrong.

A.) Defects are process failures that impact the actual performance of the product. When questioned about how they determined that these defects impact loan performance, the agencies responded that “someone must have done it.” Once again lenders are faced with focusing on the issues Fannie, Freddie and FHA are worried about, not their own concerns.

B.) Calculating error rates is another flaw. Quality management and Six Sigma calculate an error rate as the number of errors divided by the number of opportunities for error. For example, if there is one possible defect per loan and 100 loans, of which 10 have that defect, then the error rate is 10%, or 10 divided by 100. The complex identification of defects, and the equally complex classification system that each lender has to develop, leave the results open to manipulation and error.
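The arithmetic is simple enough to express directly. The 20-attribute variant below is an illustrative assumption, not an agency figure; it shows why counting opportunities, not just loans, changes the rate:

```python
def error_rate(defects_found, units, opportunities_per_unit):
    """Error rate as defects divided by total opportunities for error,
    the standard quality-management / Six Sigma definition."""
    total_opportunities = units * opportunities_per_unit
    return defects_found / total_opportunities

# The example above: 100 loans, one possible defect per loan, 10 defective.
rate = error_rate(10, 100, 1)  # 0.10, i.e. 10%

# If each loan instead has, say, 20 checked attributes, the same 10
# findings give a much smaller per-opportunity rate:
rate_multi = error_rate(10, 100, 20)  # 0.005

# Six Sigma convention scales this to defects per million opportunities.
dpmo = rate_multi * 1_000_000
```

Two lenders reporting the same “error rate” can therefore be describing very different processes if their defect definitions and opportunity counts differ, which is exactly how complex classification schemes open the results to manipulation.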

C.) The reporting requirements are focused on providing trends to management. However, the flawed sampling parameters and the lack of understanding of statistical control methods, ranges of variation and random error mean that any trending reports are so inaccurate as to be meaningless.
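Standard statistical process control supplies exactly the missing piece: control limits that separate genuine signals from ordinary random variation. A minimal p-chart sketch follows; the monthly figures are invented for illustration:

```python
import math

def p_chart_limits(defect_counts, sample_sizes):
    """Three-sigma control limits for a proportion (p) chart,
    a standard statistical-process-control tool."""
    p_bar = sum(defect_counts) / sum(sample_sizes)  # overall defect rate
    limits = []
    for n in sample_sizes:
        sigma = math.sqrt(p_bar * (1 - p_bar) / n)
        limits.append((max(0.0, p_bar - 3 * sigma), p_bar + 3 * sigma))
    return p_bar, limits

# Hypothetical monthly QC results: defects found and loans reviewed.
defects = [4, 6, 3, 9, 5]
samples = [100, 100, 100, 100, 100]
p_bar, limits = p_chart_limits(defects, samples)

# A month is a genuine signal only if its rate falls outside its limits;
# everything inside the limits is random variation, not a "trend."
```

In this made-up series the jump from 3% to 9% looks alarming on a raw trend report, yet both months sit inside the control limits, which is precisely why trending raw defect counts without these limits misleads management.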

D.) The pre-funding review process is a joke. Because they do not understand QM at all, the agencies do not comprehend the uselessness of these inspections. Management, on the other hand, and rightly so, sees them as another expense with no value added. Rather than trying to “catch” mistakes on individual loans, the effort would be much better spent analyzing the process.

E.) Conducting a “root cause analysis” on each error. This leads to nothing but wasted effort, given the totally flawed sample and review requirements. For example, one lender recently found 11 files with the same underwriting issue, but rather than attacking it as a process problem, they conducted a root cause analysis of every one of the findings. Of course, none of the results were the same. At the end of the day, these changes have done nothing but add cost and wasted effort.

Probability the issue will create another crisis soon: extremely high

So what have we learned from this analysis? Will there be another crisis in the not-so-distant future? The answer is “probably.” After all, the obvious and oft-cited issues of the 2007 crisis still exist, and others, such as “no doc” loans and “stated income,” are beginning to re-emerge. Private investors are once again peeking out of the rabbit hole they ran down in 2009 and 2010.

Unfortunately, while there are new requirements in place, many of the truly needed operational controls are still in the hands of senior executives. To prevent even the threat of another crisis, these individuals must take on accountability for something other than profitability within their organizations. This means leading initiatives that develop industry-wide operational control standards and benchmarks for regulator and investor expectations. As a result, the industry would have comparative analytics to determine conformance to requirements and quality excellence.

They must also individually develop quality control programs based on their own risks and controls to ensure they are getting reports that provide meaningful and useful direction on producing quality products. We must break from the chokehold of agency requirements and individually determine what is best for each of us.

Lenders and investors alike must recognize the operational risks in every process that increase the likelihood of financial impact, and establish a means to account for them financially. Whether it is a higher price for product excellence, a cost benefit for improvements or an ROI measurement, until this is built in, these risks will just be seen as a “cost of doing business” and will continue to grow every year.

Overall, our analysis shows that the probability is much higher than it should or could be if executives focused on the operational risks within their organizations. Whether industry leadership, along with each individual executive, takes these issues to heart and institutes strong operational controls is still an open question. Hopefully it will be answered in our favor.

About The Author

Rebecca Walzak
rjbWalzak Consulting, Inc. was founded and is led by Rebecca Walzak, a leader in operational risk management programs in all areas of the consumer lending industry. In addition to consulting experience in mortgage banking, student lending and other types of consumer lending, she has hands-on practical experience in these organizations and has held numerous positions from top to bottom of the consumer lending industry over the past 25 years.

Mortgage Lending Is Suffering From Paradigm Paralysis

In researching material for topics of discussion, I came across a phrase that really hit the proverbial nail on the head: “paradigm paralysis.” What, you ask, is paradigm paralysis, and why was it such a revelation? The answers to those questions are simple.

Paradigm paralysis is the condition in which an individual, business or culture has an expectation about how something should be done and, when something new is introduced that falls outside that pattern, finds it hard to see or accept it. In other words, the actions that need to be taken to change accepted patterns are just not taken, even when those actions would produce a better result. Be it individuals, businesses or cultures, we are paralyzed. The reason the phrase struck me as so significant is that this is exactly what has occurred in the mortgage industry, most particularly in the adoption of operational risk management and the redirection of the quality control process.

The basic mortgage process of evaluating borrowers and collateral has been around since the industry’s inception in the 1930s. However, over the past thirty-five to forty years, it has expanded significantly with the focus on marketing loan products through the capital markets. In 1985, Fannie Mae introduced the first quality control requirements for lenders selling loans to them. The other agencies quickly followed. However, the paradigm they followed in doing so was one of loan inspections, with reports providing a data dump that was supposed to help management resolve loan-level problems.

Because of the mortgage crisis that occurred in 2008 and the subsequent regulatory explosion, many anticipated that we would see significant changes in the mortgage lending process. While there has been an escalation of technological support since that time, along with changes to help address some of the regulatory issues and some minor adjustments to industry processes, including quality control, there has not been any real progress in any of the process paradigms.

Of particular interest to me are the critical changes that are needed in QC. Without a doubt, the quality control standards in place during the build-up to the crisis were a critical and significant factor in the collapse that followed. This was not because they weren’t followed but because they were inadequate and antiquated. Yet during this same time period, other industries in the US and around the world were implementing new quality control standards as part of overall operational risk programs. Once implemented, these programs focus on operational process analysis and provide a method to effectively price for the risk of poor controls and processes.

So, why hasn’t the industry moved in this direction? Unfortunately, we continue to suffer from paradigm paralysis and have failed to make real progress in this critical segment of mortgage lending. Until lenders are willing to take responsibility for the management, control and monitoring of the products they produce, the industry will continue to suffer. One can only hope that the leaders of the industry will become brave enough to challenge this agency-caused paralysis and make a change for the better.

About The Author

Rebecca Walzak
rjbWalzak Consulting, Inc. was founded and is led by Rebecca Walzak, a leader in operational risk management programs in all areas of the consumer lending industry. In addition to consulting experience in mortgage banking, student lending and other types of consumer lending, she has hands-on practical experience in these organizations and has held numerous positions from top to bottom of the consumer lending industry over the past 25 years.