One of the first introductions of artificial intelligence to the general population came in 2011, when Watson competed on Jeopardy. Ken Jennings and Brad Rutter were arguably the best players the show had produced over its decades-long lifetime. In total, they had walked away with more than $5 million in prize winnings, a testament not only to the breadth and depth of their knowledge but to their strategic savvy with category selection and wagering. Watson, a computer system developed by IBM, was capable of answering questions posed in natural language. Watson had access to 200 million pages of structured and unstructured content but was not connected to the Internet. It consistently outperformed its human opponents on the game’s signaling device, yet had trouble in a few categories, notably those with short clues of only a few words. The key here was the development of a natural language processor that would become the foundation for numerous future applications like Siri. But while its rapid responses to questions may have struck many as robotic, Watson was not a robot in the traditional sense. Robots are machines built to carry out physical actions and may or may not be designed to approximate the human form. I am sure many of you remember the TV series ‘The Jetsons,’ with Rosie, the humanoid robot maid and housekeeper. Or maybe not, because that series aired in the early 1960s.
‘In Search of a Robot More Like Us’ was a 2011 New York Times science article written by John Markoff, who stated:
The robotics pioneer Rodney Brooks often begins speeches by reaching into his pocket, fiddling with some loose change, finding a quarter, pulling it out and twirling it in his fingers. The task requires hardly any thought. But as Dr. Brooks points out, training a robot to do it is a vastly harder problem for artificial intelligence researchers than IBM’s celebrated victory on Jeopardy…. Although robots have made great strides in manufacturing, where tasks are repetitive, they are still no match for humans, who can grasp things and move about effortlessly in the physical world. Designing a robot to mimic the basic capabilities of motion and perception would be revolutionary, researchers say, with applications stretching from care for the elderly to returning overseas manufacturing operations to the United States.
So, let’s leave the discussion about robots for another time. Instead, I’ll focus on defining augmented intelligence and differentiating it from artificial intelligence. It’s more than a question of semantics. Artificial intelligence, perhaps from its popular-culture use in general and its science fiction use in particular, can conjure up images of sentient machines with personal agendas. It suggests a culture where, at least in some part, humans are no longer required to make decisions. Some industry experts believe that the term artificial intelligence can create more negative speculation about the future than hope.
Whatis.com defines augmented intelligence as an alternative conceptualization of artificial intelligence that focuses on AI’s assistive role, emphasizing that it is designed to enhance human intelligence rather than replace it. The alternative label also reflects the current state of the technology and research more accurately.
According to an article by Athar Afzal, “We’ve transitioned from an agricultural-dominated society to the industrial revolution – and now to a more data-driven economy. What we’ve witnessed during each of these stages is some form of mechanics or machinery developed to augment our performance, thereby improving our outcome…. The world has a lot of opportunity to gain and make our lives better with augmented intelligence – it’ll make our lives far smoother and more enjoyable. I invite everyone to view Ginni Rometty’s speech at the World Economic Forum.”
Researchers and marketers hope the term augmented intelligence, which has a more neutral connotation, will help people understand that AI will simply improve products and services, not supplant the people who use them.
While a sophisticated AI program is certainly capable of making a decision after analyzing patterns in large data sets, that decision is only as good as the data human beings supplied for the program to use. The choice of the word augmented, which means “to improve,” reinforces the role human intelligence plays when using machine learning and deep learning algorithms to discover relationships and solve problems. I’ve summarized some definitions by Jean-Albert Eude below.
Machine learning is a type of artificial intelligence (AI) that allows software applications to become more accurate in predicting outcomes without being explicitly programmed. The basic premise of machine learning is to build algorithms that can receive input data and use statistical analysis to predict an output value within an acceptable range. The processes involved in machine learning are similar to those of data mining and predictive modeling: all require searching through data for patterns and adjusting program actions accordingly.
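The loop in that definition can be sketched in a few lines of Python. This is only an illustration of the premise, not anything from Eude’s text: the program is never told the relationship between input and output; it receives example data, uses statistical analysis (here, least-squares gradient descent) to learn the relationship, and then predicts an output value for a new input. The data points and learning rate below are invented for the example.

```python
# Sketch of the machine-learning premise: receive input data, use
# statistical analysis to learn a pattern, then predict an output
# value for unseen input. Data and parameters are illustrative.

def fit_line(xs, ys, lr=0.01, epochs=2000):
    """Learn slope m and intercept b by minimizing mean squared error."""
    m, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of mean squared error with respect to m and b.
        grad_m = sum(2 * (m * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (m * x + b - y) for x, y in zip(xs, ys)) / n
        m -= lr * grad_m      # adjust program behavior to fit the data
        b -= lr * grad_b
    return m, b

# Toy training examples that roughly follow y = 3x + 1. The pattern is
# never explicitly programmed; it is inferred from the examples.
xs = [0, 1, 2, 3, 4]
ys = [1.1, 3.9, 7.2, 9.8, 13.1]

m, b = fit_line(xs, ys)
prediction = m * 5 + b        # predict the output for a new input, x = 5
```

The “acceptable range” in the definition corresponds here to how closely the learned line must fit before training stops; more examples or more iterations tighten that range.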
Deep learning is an aspect of artificial intelligence (AI) that is concerned with emulating the learning approach that human beings use to gain certain types of knowledge. At its simplest, deep learning can be thought of as a way to automate predictive analytics. While traditional machine learning algorithms are linear, deep learning algorithms are stacked in a hierarchy of increasing complexity and abstraction. Each algorithm in the hierarchy applies a non-linear transformation to its input and uses what it learns to create a statistical model as output. Iterations continue until the output has reached an acceptable level of accuracy. The number of processing layers through which data must pass is what inspired the label “deep.” The advantage of deep learning is that the program builds the feature set by itself without supervision. This is not only faster; it is usually more accurate. To achieve an acceptable level of accuracy, deep learning programs require access to immense amounts of training data and processing power, neither of which was easily available to programmers until the era of big data and cloud computing.
The value of such augmented predictive analytics to a segment of the economy as dependent on data as the mortgage industry is obvious. What is also obvious, unfortunately, is that we may be among the last to seat ourselves at the technology table.
Often, an early title or tag line for a concept or theory evolves over time as others develop their ideas and work toward a solution. In the mortgage industry, the concept of paperless mortgages was proposed in the early 1990s to reduce and/or eliminate what some conceived as unnecessary paper and to improve the overall experience for the consumer. Along the way we started referring to it as an electronic mortgage (e-mortgage), and now it is the digital mortgage: an all-inclusive package of data and documents in a format suitable for both human and machine consumption. That will certainly achieve the initial objectives of eliminating paper and improving the consumer experience. The operational benefits extend from origination all the way through to the secondary market.
But going digital without building the internal architectures to capitalize on data-driven support technology is like going to a 3D movie, but not putting on the 3D glasses to watch it. If we don’t keep moving our own finish line, we risk being trampled by those with a longer view of the race.