Posts

How To Make Your CRM Big Data Small

Small and midsized businesses (SMBs) love to think big, and there’s no better way to do that than with the right customer relationship management (CRM) technology. The operative words here, of course, are “the right.”

As the sheer volume of customer information captured through CRM continues to increase, businesses must evaluate whether they can truly capitalize on the valuable data their CRM software delivers. So it’s important to ensure that your CRM is designed with an SMB’s needs in mind, since the tools and data that large corporations rely on might be of minimal use, irrelevant or even hold your business back.

Instead, find a CRM that can scale to your business’s customer data needs by following a few guiding principles.

Start with the basics

There’s a wealth of invaluable data that can be gleaned from today’s CRM technology. Businesses can aggregate information about customer demographics, pain points, organizational objectives, timelines, contact preferences and much, much more. But the truth is, SMBs are often best served by taking a more minimalistic approach to their CRM strategy.

It’s important for new CRM users to begin with the essentials before working their way up to more complicated features and data points. Otherwise, they may find themselves drowning in a sea of customer data they are not prepared to interpret or make actionable.

Commence your CRM journey by establishing what your business’s basic needs are. Are you looking to track sales? Is customer feedback a top priority? What are your customers’ needs? Sticking to the essentials will help ensure your CRM implementation’s efficiency, ease of adoption and therefore effectiveness.

After you’ve mastered the basics, you can drill down further.

Focus on areas that benefit you most

Many CRM programs offer a daunting and seemingly endless assortment of complicated data input screens. But for SMBs, this data overload can be both overwhelming and superfluous. Such businesses are often better off focusing on the areas that offer them the greatest benefits.

According to SoftwareAdvice.com, managing contact info, tracking interactions, scheduling and email marketing are among the other top priorities for SMBs. If you do these basics well, your CRM will serve you well and help you strengthen customer interactions. Nearly 75 percent of small and midsized businesses using CRM have reported improved customer relationships.
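
To make this concrete, here is a minimal sketch of the lean data model that strategy implies: just contacts, interactions and follow-ups. The class and field names below are illustrative assumptions, not the schema of any particular CRM product.

```python
# A minimal sketch of the lean data an SMB-focused CRM strategy might start
# with. Class and field names are illustrative assumptions only.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional


@dataclass
class Contact:
    name: str
    email: str
    phone: str = ""
    preferred_channel: str = "email"      # contact preference: "email", "phone", ...


@dataclass
class Interaction:
    contact_email: str                    # links the note back to a Contact
    channel: str                          # "call", "email", "meeting", ...
    note: str
    follow_up: Optional[datetime] = None  # scheduling: the next planned touch point
    occurred_at: datetime = field(default_factory=datetime.now)


# Two lists are enough to cover the priorities named above: contact management,
# interaction tracking and follow-up scheduling.
contacts: List[Contact] = [
    Contact("Pat Rivera", "pat@example.com", preferred_channel="phone"),
]
interactions: List[Interaction] = [
    Interaction("pat@example.com", "call", "Asked about annual pricing",
                follow_up=datetime(2024, 3, 1)),
]
```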

Take advantage of new technologies

Today’s CRM technology includes features such as voice interaction, predictive analytics and various other forms of artificial intelligence (AI) functionality. The right recommendation engine, tailored specifically for the SMB market, can provide a significant value to your business.

These technologies serve an important role in helping boil down the big data captured by CRMs to what is most relevant and actionable for your business. Though the majority of small businesses have been slow to embrace AI-capable CRMs, those taking advantage of these features are reporting significant benefits.

There’s no one-size-fits-all CRM strategy for capitalizing on the big data available to today’s companies. SMBs have a unique set of needs that differ greatly from those of their large enterprise competitors. The CRM an SMB chooses to serve those needs should reflect this reality; after all, if the needs aren’t the same, the strategy shouldn’t be either.

About The Author

Analyzing The Election


A Forbes article by Jonathan Vanian said it best, “Prior to Trump’s upset win, virtually all national polls showed the businessman and reality television star trailing Democratic nominee Hillary Clinton. Her win was considered inevitable, with prominent pollsters and pundits merely arguing about how big her guaranteed victory would be. And then on Tuesday, voters proved the experts wrong.”

What did Trump do right? To say he ran an unconventional campaign would be a gross understatement. He methodically eliminated 16 primary contenders, countering their talking points with his bombastic personality. He captured an inordinate amount of free TV time by being outlandish. Trump had a sense of what people wanted to hear and recognized that anger among working class white voters ran deep. He played to emotion, not data points.

Steven Bertoni’s article in the December 20, 2016, issue of Forbes magazine carried the headline, “The secret weapon of the Trump campaign: his son-in-law, Jared Kushner, who created a stealth data machine that leveraged social media and ran like a Silicon Valley startup.” Kushner, who had no political experience, committed to Trump’s campaign in November 2015, after seeing a raucous Trump rally in Springfield, Illinois. On the return trip, Trump and Kushner talked about how the campaign might better use social media.

“At first Kushner dabbled, engaging in what amounted to a beta test using Trump merchandise. ‘I called somebody that works for one of the technology companies that I work with, and I had them give me a tutorial on how to use Facebook micro-targeting,’ Kushner says. The Trump campaign went from selling $8,000 a day worth of hats and other items to $80,000, generating revenue, expanding the number of human billboards—and proving a concept. By June, the GOP nomination secured, Kushner took over all data-driven efforts. Within three weeks, in a nondescript building outside San Antonio, he had built what would become a 100-person data hub designed to unify fundraising, messaging, and targeting.”

In this case, a lack of awareness of traditional campaigning was an advantage. Kushner was able to look at the business of politics without the constraint of precedent.

Eric Schmidt, the former CEO of Google and one of the designers of the Clinton campaign’s technology system, agrees with Vanian. “Jared Kushner is the biggest surprise of the 2016 election. Best I can tell, he actually ran the campaign and did it with essentially no resources.”

What did Clinton do wrong? According to The Washington Post, Clinton’s campaign relied on a custom algorithm called Ada, a complex computer program that staff fed “a raft of polling numbers, public and private” and that played a role in most strategic decisions Clinton aides made. According to aides, Ada ran 400,000 simulations a day, generating a report that gave Robby Mook, the campaign manager, a detailed roadmap of which battleground states were most likely to tip the race in one direction or the other, allowing them to decide where and when to send the candidate and her surrogates and where to air television ads.
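
The Post’s description amounts to a state-level simulation engine. As a rough, hedged illustration of what 400,000 daily simulations might look like in miniature (Ada’s actual code and inputs were never made public), the sketch below uses made-up state win probabilities and a placeholder electoral-vote base.

```python
# Hedged illustration of a state-level Monte Carlo election forecast.
# NOT Ada's actual model; every probability and number here is a placeholder.
import random

# Hypothetical per-state (win probability, electoral votes) for the candidate.
states = {
    "PA": (0.77, 20),
    "MI": (0.79, 16),
    "WI": (0.81, 10),
    "FL": (0.55, 29),
}
BASE_ELECTORAL_VOTES = 250   # placeholder: votes assumed safe before these states


def simulate_once():
    """Run one simulated election and return the candidate's electoral votes."""
    total = BASE_ELECTORAL_VOTES
    for win_prob, ev in states.values():
        if random.random() < win_prob:
            total += ev
    return total


def win_probability(n_sims=400_000):
    """Share of simulations in which the candidate reaches 270 electoral votes."""
    wins = sum(simulate_once() >= 270 for _ in range(n_sims))
    return wins / n_sims


if __name__ == "__main__":
    print(f"Estimated win probability: {win_probability():.1%}")
```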

Like much of the political establishment, however, Ada did not accurately predict the turnout of rural voters in Rust Belt states. Pennsylvania was correctly identified as a critical state early on, which explains why Clinton visited it often and closed her campaign in Philadelphia. Other states that Clinton would lose, like Michigan and Wisconsin, either were not identified as at-risk or were deemed so too late.

A number of election post mortems indicate that Bill Clinton, a politician with proven fluency in reading and responding to voter emotion, advocated that his wife’s campaign pay more attention to white working class voters. Perhaps he reasoned that even if that group was largely out of Clinton’s reach, courting it might draw enough votes to win Michigan, Pennsylvania, and Wisconsin—states that Trump narrowly won instead.

Let’s look at data analytics as it pertains to a presidential election: An article in Wired by Cade Metz stated, “The lesson of Trump’s victory is not that data is dead. The lesson is that data is flawed. It has always been flawed—and always will be…. But this wasn’t so much a failure of the data as it was a failure of the people using the data. It’s a failure of the willingness to believe too blindly in data, not to see it for how flawed it really is.”

Summary: The use of data analytics by presidential campaigns did not begin in the 21st century. Clinton aides believed their work with data was the most sophisticated to date, and while this may be true, it did not translate to a strategic advantage over Trump when all other factors were accounted for. If Barack Obama’s 2012 presidential victory proved big data’s triumph for accurately predicting elections, Donald Trump’s 2016 presidential win could demonstrate the opposite.

According to Nik Rouda, senior analyst at the Enterprise Strategy Group, “Polls aren’t really big data. The sample sizes were certainly good enough for a poll, but maybe didn’t meet the definitions around volumes of data, variety of data, and historical depth contrasted against real-time immediacy, machine learning, and other advanced analytics. If anything, I’d argue that more application of big data techniques would have given a better forecast.”

Professor Samuel Wang, manager of the Princeton Election Consortium, which gave Clinton a 99 percent chance of winning as of the morning of Election Day, stated, “The incorrect forecasts don’t appear to be a problem with the margin of error; the polling resulted in a systematic error. The entire group of polls was off, as a group. This was a really large error, around 4 points at presidential and Senate levels, up and down the ticket.”
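
Wang’s distinction between margin of error and systematic error is the crux. The sketch below, with illustrative leads and error sizes that are assumptions rather than the Consortium’s actual model, shows why: a shared error moves every state together, so a collective miss that independent-error models treat as nearly impossible becomes quite plausible.

```python
# Sketch of the systematic-error point: a shared polling error shifts every
# state the same way. Leads, states and error sizes are illustrative only.
import random

POLLED_LEADS = {"PA": 2.0, "MI": 2.5, "WI": 3.0}   # candidate's lead, in points


def chance_of_losing_all(systematic_sd, independent_sd=3.0, n_sims=100_000):
    """Probability the candidate loses every state under a given error model."""
    misses = 0
    for _ in range(n_sims):
        shared = random.gauss(0, systematic_sd)   # same shift hits every state
        if all(lead + shared + random.gauss(0, independent_sd) < 0
               for lead in POLLED_LEADS.values()):
            misses += 1
    return misses / n_sims


# With purely independent errors, losing all three looks like a freak event.
print(f"Independent errors only:   {chance_of_losing_all(0.0):.1%}")
# Add a roughly 4-point systematic component and the same sweep becomes plausible.
print(f"With 4-point shared error: {chance_of_losing_all(4.0):.1%}")
```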

Wang went on to say he is still evaluating the results. Late candidate selection by undecided voters may have affected the predictions. Whether prediction models can better account for such last-minute decisions or changed minds remains unknown for now.

As the world makes the Internet its primary means of communication, we will be confronted with even more data—so-called “Big Data.” On the Internet, fact, opinion, idle chatter, and humor run together in a common sea where intent is difficult to ascertain and bots are becoming increasingly indistinguishable from humans. The old database maxim of “garbage in, garbage out” should guide all efforts to incorporate this data into prediction models.

In the meantime, the even bigger promise is that artificial intelligence will produce more reliable predictions. But even the most sophisticated artificial decision engine remains dependent on imperfect data inputs. Neural networks can’t forecast an election without data—data that is selected and labeled by humans. While such AI systems have become adept at object recognition because people have already uploaded millions of photos to places like Google and Facebook, we lack the same kind of clean, organized data on presidential elections to train neural nets.

Conclusion: Are your data analytics prediction models suffering from the same problems as the models that predicted Hillary Clinton would easily win the U.S. presidential election? Next month we will explore this further.

About The Author

Exploring Big Data

A new white paper, written by a trio of industry experts with many decades of experience between them, gets to the heart of the challenges enterprises are encountering as they work to successfully implement big data initiatives. The 43-page document explains how big data can be a turning point, even a savior, for organizations weighed down by the dogmatic prescriptions frequently used to address their challenges. But it can also lead many to economic and career-ending failures, making it a false prophet.

“No two operations are alike and big data is currently not an off-the-shelf solution,” states co-author Cary Burch, Corporate SVP Innovation with Thomson Reuters. “Big data is a growing discipline used to integrate and analyze very large, multifaceted data sources. The integration of intelligent information within a big data solution requires advanced planning, robust data sources, and a clear linkage to discrete performance measures to ensure continued funding.”

Mark P. Dangelo, President of MPD Organizations and co-author, adds, “Once the playground of mathematicians and computer scientists, big data is growing beyond its infancy and into a global industry. However, it is being sold as an elixir for pervasive enterprise problems. It will become common practice when the reality of big data and its data abstractions moves beyond faith or R&D and into mainstream, repeatable solutions that are routinely measured against corporate success criteria. After all, innovation is not just a trend or a fad—if properly framed, big data represents the next curve of business innovation.”

“Big data has become so sexy and cool that everyone wants to sound like a strong proponent,” writes preface author Ron Ashkenas, Senior Partner, Schaffer Consulting. “If a CEO asks his technology team whether or not they are ‘doing big data,’ the politically correct answer is ‘of course we are.’ Yet, big data is still in its infancy. Before any fad becomes a fundamental, it needs to demonstrate value, not just once, but over and over. Big data has the potential to make a huge impact. But as the authors of this paper argue, let’s be cautious about declaring victory prematurely.”

The authors describe in detail how initial big data successes are ignoring the critical organizational, personnel, and process requirements needed for repeatability—the hype and expectations are grandiose, fed by media reports and vendor claims. Moreover, big data is an emerging business innovation that must be iteratively incorporated into enterprise capabilities while navigating complex, multifaceted data volumes, velocities, varieties, and veracities. For a copy of the paper, visit http://www.mpdl2c.com/big-data-series.html.

About The Author


Tracking Down Big Data


According to IBM, to gain the competitive advantage that big data holds, you need to infuse analytics everywhere, make speed a differentiator, and exploit value in all types of data. This requires an infrastructure that can manage and process exploding volumes of structured and unstructured data—in motion as well as at rest—and protect data privacy and security. Big data technology must support search, development, governance and analytics services for all data types—from transaction and application data to machine and sensor data to social, image and geospatial data, and more.

Your infrastructure must capitalize on real-time information flowing through your organization. It must be optimized for analytics to respond dynamically—with automated business processes, better agility and improved economics—to the increasing demands of big data. To protect your reputation and brand, your platform must comprise stringent policies and practices around privacy and data protection, safeguarding all of the data and insights on which your business relies. The right platform instills trust, so you can act with confidence. It controls how information is created, shared, cleansed, consolidated, protected, maintained, retired and integrated within your enterprise.

To achieve economies and efficiencies, you must run certain analytics close to the data, while it is in motion. But for data you elect to store, your infrastructure must embody a defensible disposal strategy that reduces the run rate of storage, legal expense and risk. As you infuse analytics into your organization, data security becomes more central to your competitive advantage profile. Your infrastructure must have strong security measures built in to guard your organization against internal and external threats. To relieve the pressure that big data is placing on your IT infrastructure, you can host big data and analytics solutions on the cloud, which can deliver the scalability, flexibility, expandability and economics that will provide competitive advantage into the future.

Do you know where to find the Big Data in your organization? The following infographic by Kapow can help you locate and identify the various types of Big Data in your organization.

Common categories of pools of data are archives, documents, media, business apps, social media, public Web, data storage, machine log data, and sensor data. The infographic also lists various data sources or types that make up those categories.

For example, archives might consist of scanned documents, statements, insurance forms, medical records, customer correspondence, paper archives, and “print stream files that contain original systems of record between organizations and their customers,” states Kapow.

To find out more about the various kinds of Big Data that might be available to your organization, check out the following infographic:

[Infographic: Variety of Big Data Sources, by Kapow Software]

Partnership Looks to Help Lenders Automate Strategically

Global professional services firm Alvarez & Marsal (A&M) has formed a strategic alliance between the firm’s Real Estate Advisory Services practice and METIS Financial Network, offering clients the benefit of A&M’s extensive real estate valuation, operational and consulting industry expertise, and proprietary tools, alongside METIS’ highly adaptable, scalable data, document and workflow solutions. Here’s how everything comes together:

The alliance enhances the best-in-class, value-added services and technologies available to financial institutions, private equity firms, and other sophisticated investors who look to A&M and METIS for real estate transaction support and valuation management.

Floyd W. Kephart, Chairman of METIS, stated, “The breadth and depth of A&M’s real estate consulting practice, together with our unique customizable technology, provides a paradigm shift in how real estate information is transformed into actionable, value-enhancing strategies.”

METIS provides platform technology that automates key elements of a client’s process to enhance productivity, improve transparency and streamline communication between all stakeholders in the process. “We differentiate ourselves by listening to our client and understanding their workflow, strategies and business goals in order to develop and implement a tailored solution that addresses their unique needs. Our technology not only converts ‘big data’ to user-focused data quickly, but produces key analytics and market information critical for effective management and decision making,” said Joy Hou, CEO of METIS.

Privately held since 1983, A&M is a global professional services firm that delivers performance improvement, turnaround management and business advisory services to organizations seeking to transform operations, catapult growth and accelerate results through decisive action. METIS provides custom-tailored platform technology for the management of data, documents and workflow, and has extensive experience in delivering solutions to the real estate industry, with a focus on financial institutions, equity funds and investors.

About The Author


Managing Loan Compliance With Data

News reports suggest cautious optimism ahead for 2014. Unemployment is down to pre-Great Recession levels, stock prices are moving higher, creating wealth for those invested, and home prices in many markets are increasing as foreclosed properties are sold and absorbed into the marketplace. But caution still abounds and, if pressed to reduce my concerns for the upcoming year to one word, I would have to say it is all about the “data.” In my view, data is going to drive compliance efforts by in-house risk managers in two key areas: fair lending and qualified mortgages.

For a second time, the U.S. Supreme Court was denied the opportunity to weigh in on whether raw data is proof positive of discriminatory intent when the parties to that proceeding settled their claims. So the Department of Housing and Urban Development’s discriminatory effects rule, which became effective on March 13, 2013 and formalized the “disparate impact” theory, is now the baseline tool to establish liability under the Fair Housing Act.

My past articles in this and other periodicals have harped on “scraping data” from your loan origination systems and/or mortgage document preparation software, looking for loan or borrower outliers. While any outliers might have a “legally sufficient justification” (i.e., one necessary to achieve legitimate, nondiscriminatory interests of the lender that could not be served by another practice), acquiring knowledge of such outliers arms your compliance team with the information to take action, if needed. Likewise, knowledge and assembly of that data is critical to your defense against claims of discriminatory intent asserted through statistics on behalf of a borrower or regulator.
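
As a hedged sketch of what that “scraping” can look like in practice, the snippet below flags loans whose APR sits well above the average for similar loans so compliance staff can review them. The field names and the 100-basis-point tolerance are illustrative assumptions, not regulatory standards or any particular LOS export format.

```python
# Simple outlier scan over loan records exported from the LOS / doc-prep
# system. Field names and the 100 bps tolerance are illustrative assumptions.
from collections import defaultdict
from statistics import mean

loans = [   # would normally be pulled from the loan origination system
    {"id": 1, "product": "30yr fixed", "apr": 4.10},
    {"id": 2, "product": "30yr fixed", "apr": 4.25},
    {"id": 3, "product": "30yr fixed", "apr": 6.40},   # potential outlier
    {"id": 4, "product": "15yr fixed", "apr": 3.60},
    {"id": 5, "product": "15yr fixed", "apr": 3.75},
]

# Group loans by product so each loan is compared against its peers.
by_product = defaultdict(list)
for loan in loans:
    by_product[loan["product"]].append(loan)

for product, group in by_product.items():
    avg = mean(l["apr"] for l in group)
    for loan in group:
        if loan["apr"] - avg > 1.0:   # more than ~100 bps above the peer average
            print(f"Review loan {loan['id']}: APR {loan['apr']:.2f} vs "
                  f"{product} average of {avg:.2f}")
```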

Using data mined from your systems is also critical to compliance efforts under the Ability to Repay Rule, effective January 10, 2014. Under the Rule, loans that are “qualified mortgages,” because they were underwritten against certain guidelines designed to maximize the likelihood of repayment, enjoy either a “safe harbor” or a rebuttable presumption that the lender complied with the Rule. But the Rule itself is a minefield of “if-then logic,” blending the amount of fees, points, amortization, debt levels and APR (calculated amounts in and of themselves) into a cornucopia of calculations that either stamps a loan with a “QM” designation … or not.
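
To give a flavor of that if-then structure, here is a deliberately simplified sketch, not a compliant QM engine. The thresholds shown (a 3 percent points-and-fees cap, a 43 percent DTI limit, and an APR within 1.5 points of APOR for safe harbor) are the commonly cited general-case figures; the actual Rule varies them by loan size and indexes them over time, so treat every number here as an assumption.

```python
# Deliberately simplified sketch of the QM "if-then logic." All thresholds
# are the commonly cited general-case figures and are assumptions here; the
# real Rule varies by loan size, lien status and year.

def qm_status(loan_amount, points_and_fees, dti, apr, apor):
    """Return a rough QM designation for a first-lien loan."""
    if points_and_fees > 0.03 * loan_amount:
        return "Not QM: points and fees exceed 3% of the loan amount"
    if dti > 0.43:
        return "Not QM: debt-to-income ratio above 43%"
    if apr <= apor + 1.5:
        return "QM: safe harbor (APR within 1.5 points of APOR)"
    return "QM: rebuttable presumption (higher-priced but within limits)"


print(qm_status(loan_amount=250_000, points_and_fees=6_000,
                dti=0.38, apr=4.4, apor=3.5))
# -> QM: safe harbor (APR within 1.5 points of APOR)
```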

Being able to quickly drill down through the forest of data to find the offending data point(s) that prevents QM designation demonstrates the importance of the data. But, as I have pointed out in my past writings, understanding that data also enables you to peer into your business model and learn where profits are elusive, or your income is “leaking,” or which loan products (or even producers) are stellar performers. And if your mortgage document preparation system and/or loan origination system don’t give you the data in a usable form, compel your provider to do so … it’s your data and it’s essential to the financial health of your business and key to mitigating compliance risk.

Years ago, magazines would profile their subscribers and sell that mined data to advertisers interested in reaching a targeted audience. Credit card statements were filled with leaflets suggesting products in line with your purchasing habits. While the progression from paper to ever-faster computers has produced data in volumes never before imagined, effectively funneling that data into manageable, useful “bytes” (the pun is intended) is a necessary step toward a best-practices loan compliance program and equally serves as a scorecard of your financial health.

About The Author


The Naughty Or Nice List

My grandchildren were here for Christmas, and it was amazing how much they focused on whether they were on Santa’s “Naughty or Nice List.” After all, getting a Furby Boom would make the difference between a successful and a disappointing holiday, so being good was paramount, at least for a few days. It struck me then that lenders also face their own perpetual Naughty or Nice List. Only ours is not compiled by a jolly elf trying to bring happiness, but by a group of regulators trying to bring lenders into a transparent lending environment.

The CFPB has made it quite clear that it is collecting data from lenders, consumers and other resources. And the more data it collects, the more it knows about every company. So have you ever asked yourself, “What are they doing with all that data?” Or, “What do they know that I don’t?” Or, “Shouldn’t I at least know what they know?” And, probably most importantly, “Ho, Ho, Ho – how can I know?” Having, using and understanding data seems to be the key.

Having and using data is becoming a critical part of the day-to-day activities of any organization. On Sunday, January 12, The Wall Street Journal ran an article on the skill sets necessary in today’s job market that highlighted the need to understand data, noting that it is “… an increasing important part of everybody’s day-to-day…” It is quite clear that everyone needs to understand data and know how it applies to them. Even more important is the need to make sure you even have the right data. This goes double for CEOs and other members of the executive management team.

So what data should management have and how should they use it to keep themselves off the “Naughty” list? All too often the only data that management members see are the production results. But is that all they really need to see? Maybe these executives should know what “big data” they need in order to give them a broader view of the organization. And just maybe they should be thinking about the data that the CFPB has collected on them and the borrowers they serve. An even more radical idea is to get industry data as well. That way it’s easy to compare your organization to everyone else and know how you stack up.

Unfortunately, data is not something easily attained in this industry. We have acknowledged that data elements are not consistent, and despite the best efforts of MISMO, there are still lenders who have and/or want to maintain their own proprietary data. We also have systems that are inadequate at collecting the data we need, and only recently have we begun to make sure the data we do have is accurate. So are lenders doomed to face regulators not knowing if they have been good or bad? What can lenders do today to give themselves a fighting chance to make it onto the “Nice” list?

One way is to make use of the data that is available. Every lender now files HMDA data, for example, and that collective industry data is publicly available. Yet how many lenders have actually used their own data, or the combined database, to analyze their organization? How many lenders know how they compare to the industry when it comes to such things as sending out incompleteness notices or denial reasons? Well, the CFPB knows, and if you are on the low end of the numbers you are certainly a “Naughty” list prospect. And that means not just coal in your stocking, but exams and probably fines in your future.
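
As a hedged sketch of that kind of benchmarking, the snippet below compares a lender’s denial rate against an industry figure built from public HMDA-style records. The record layout and the “worth a closer look” threshold are simplified assumptions for illustration, not the actual HMDA file format or any regulatory trigger.

```python
# Benchmark your denial rate against the industry using HMDA-style records.
# The record layout and the review threshold are illustrative assumptions.
def denial_rate(applications):
    """Share of applications with a 'denied' action taken."""
    if not applications:
        return 0.0
    denied = sum(1 for a in applications if a["action"] == "denied")
    return denied / len(applications)


# Placeholder records; in practice these would be parsed from your own HMDA
# LAR submission and from the public industry dataset.
our_apps = [{"action": "originated"}] * 80 + [{"action": "denied"}] * 20
industry_apps = [{"action": "originated"}] * 880 + [{"action": "denied"}] * 120

ours, industry = denial_rate(our_apps), denial_rate(industry_apps)
print(f"Our denial rate: {ours:.1%} vs industry: {industry:.1%}")
if ours > industry * 1.25:   # illustrative threshold for "worth a closer look"
    print("Denial rate well above industry: expect regulator questions.")
```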


About The Author
