Machine learning is generating a tremendous amount of attention these days, from the press as well as from practitioners. And rightly so – machine learning is a transformative technology. But despite the references to the topic, the money raised from venture capitalists, and the spotlight that Google is bringing to the subject, machine learning is still poorly understood outside of a core group of highly technical leaders.
This causes people to underestimate how transformative machine learning is going to be. It also shields business leaders from what they need to do to prepare for the era of machine learning.
Let’s discuss both edges of the sword – the promise and the pitfalls, starting with a definition.
Machine learning is a class of algorithms that can learn from and make predictions on data. Generally speaking, the more data, the better the outcome for machine learning techniques. Machine learning doesn’t require explicit rules to govern performance. It does not require manual construction of “if this, then that.” It will make that determination on its own, based on the data.
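The distinction between hand-written rules and learned behavior can be sketched in a few lines of Python. Everything here is illustrative – the transaction amounts, labels, and the `learn_threshold` helper are hypothetical – but it shows the core idea: the cutoff is inferred from the data rather than written as an “if this, then that” rule.

```python
# A minimal sketch of "learning from data" versus hand-written rules.
# Instead of hard-coding "if amount > 1000, then flag," a tiny model
# finds the best cutoff from labeled examples. (Hypothetical data.)

def learn_threshold(examples):
    """Pick the cutoff that best separates label 0 from label 1."""
    values = sorted(v for v, _ in examples)
    # Candidate cutoffs: midpoints between adjacent observed values.
    candidates = [(a + b) / 2 for a, b in zip(values, values[1:])]

    def accuracy(t):
        return sum((v > t) == bool(label) for v, label in examples) / len(examples)

    return max(candidates, key=accuracy)

# Labeled transaction amounts: 1 = flagged, 0 = fine.
data = [(120, 0), (340, 0), (560, 0), (2100, 1), (3900, 1), (5200, 1)]
threshold = learn_threshold(data)
print(threshold)  # prints 1330.0 – a cutoff the model found, not one we wrote
```

Feed it different data and it learns a different cutoff; no rule in the code ever names a specific dollar amount.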
The transformative effect of machine learning, and why it is so important now, is a function of the fact that we are hitting trigger points across data, compute, and algorithmic sophistication.
This confluence of advances across each of these elements makes machine learning appear to be a sudden success. That’s a bit of a mirage – what is happening today has been in the works for quite some time. Let’s take a closer look at these items:
Data. The emergence of new data storage and processing technologies (think Hadoop) has made the collection of massive amounts of data incredibly inexpensive. Companies no longer choose what to save and what to delete; it is simply easier to store everything. If the value of the data is not apparent today, perhaps it will emerge later. This provides a massive corpus for machine learning algorithms, which have an inexhaustible appetite for data.
Compute. The advances in compute continue to amaze. Years after the demise of Moore’s Law was first forecast, researchers at Intel, IBM, Nvidia, and others continue to crank out innovation after innovation, keeping Moore’s Law alive and well. Big problem? No problem: add a couple dozen or a couple hundred cores, on demand. This has limits, however, as not every problem can be brute-forced.
Algorithmic sophistication. Interestingly, algorithmic sophistication is related to data and compute. Because of the advances in those areas, it is now possible to explore the algorithmic space more completely with more sophisticated techniques. One example is topological data analysis, which depends on those compute advances to view ever-increasing datasets from a variety of algorithmic angles.
Machine learning is transformative because it dramatically accelerates high-performance outcomes. Researchers have worked on image-recognition problems for decades, but Google effectively perfected it in a few quarters once they tuned the machine-learning algorithm. Given the size of the corpus and the sophistication of the team, it is unlikely anyone will ever surpass them in this area.
This kind of innovation is happening at a handful of companies, companies that collectively employ the vast majority of machine learning talent in enterprises today. Those companies include Google, Facebook, Amazon, Apple, IBM, GE, plus a handful of startups that are hyper-focused on disrupting specific applications or industries.
Those companies are investing heavily in machine learning because it enables exponential growth. In an exponential growth world (and that’s what machine learning enables), even 10 percent less growth will result in getting left behind. Starting too late but having the same growth rate will have the same effect.
While the reward for exceptional execution is exponential growth, the reality will be a series of discontinuous events that keep this growth from being a smooth line. How a company deals with those discontinuous events will define winning and losing.
Those discontinuous events are the other edge of the machine-learning sword – the elements that can derail the competitive advantage associated with this technology.
Here are a few:
Technical debt. Machine learning systems are not self-replicating or self-optimizing software applications. As a result, over time they accumulate technical debt. Technical debt manifests itself in a variety of ways, including entanglement, hidden feedback loops, underutilized data dependencies, pipeline jungles, and undeclared customers. Technical debt results in unintended consequences, brittleness, and obfuscation. All of these are suboptimal and have implications for the performance of the system.
Understanding the details of technical debt is the responsibility of the technical teams. Understanding the concept and implications is the responsibility of the management team.
The black box. Certain algorithmic approaches are black boxes when it comes to understanding what is going on, particularly for individual data points. This is not always a problem, but it often presents real challenges for an organization, both culturally and technically. If the algorithmic approach is a black box and the world changes in such a way that the model underperforms, the lack of understanding puts the system at risk from its skeptics. The inability to explain why the model failed can set an organization back years in terms of buy-in.
Algorithmic selection. While this is not a surprise, there is no “God” algorithm in machine learning. No algorithm is equally good at text analytics, pattern matching, segmentation, anomaly detection, and feature generation.
Indeed, there are dozens of powerful approaches and thousands of highly tuned variants of those algorithmic classes, each with its own distinct set of advantages and disadvantages. Ultimately, different algorithms serve different purposes. For example, your logistic regression model (LRM) sees the world of data very differently than your support vector machine (SVM). What that means is that, as a data scientist or a computer scientist, you put down that LRM and you pick up your SVM. They are different tools for different jobs. These aren’t just different-sized wrenches, however; putting down an LRM to pick up an SVM is very time-consuming.
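The LRM-versus-SVM difference can be made concrete with the two models’ loss functions. This is a minimal sketch using the standard logistic and hinge losses (the score and label here are made up for illustration): logistic loss never reaches zero, so every point keeps tugging on a logistic regression model, while hinge loss drops to exactly zero for points correctly classified beyond the margin, which an SVM then ignores.

```python
import math

# Both models score a point with w.x + b, but penalize that score with
# different losses, so they "see" the same data point differently.

def logistic_loss(score, label):
    """Loss used by logistic regression; label is -1 or +1."""
    return math.log(1 + math.exp(-label * score))

def hinge_loss(score, label):
    """Loss used by a (linear) SVM; label is -1 or +1."""
    return max(0.0, 1 - label * score)

score = 3.0  # a confidently, correctly classified point (label +1)
print(logistic_loss(score, +1))  # small but positive: still pulls on the LRM
print(hinge_loss(score, +1))     # exactly 0: the SVM ignores this point
```

That difference in how points are weighed is one reason swapping between the two is not just a change of wrench size: it changes which data points shape the model at all.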
Keeping an organization attuned to using the right algorithmic tools is as important as knowing when it is appropriate to use net present value (NPV) or internal rate of return (IRR).
Human bias. Related to algorithmic selection is the concept of human bias. Ultimately, machine-learning algorithms are complex mathematical formulas, and mastery of a specific approach leads the practitioner to lean toward that approach – often heavily. This tendency brings to mind the old adage, “When all you have is a hammer, everything looks like a nail.” If everyone on your machine-learning team graduated from the same school at the same time, the odds are that they are using the same algorithms. Injecting algorithmic diversity into your team will have significant benefits for the organization.
Addressing the Pitfalls
With technical debt, leadership has to ensure that alongside great mathematicians sit great software engineers. One without the other will be out of balance and result in problems down the road. Find and hire both.
For the black box problem, you have to rely on hundreds of years of statistical knowledge to shed light on why individual decisions were made by the model. This requires real rigor, but it is paramount for those times when you need to know why the algorithm made a decision. This will be critical to your efforts to create a machine-learning culture. People need to trust the system, and statistics can provide the explanatory bridge.
For the algorithmic selection challenge, the answer is to employ many algorithms at once so that you don’t have to bet on one up front. The compute power is in place to do this, and frameworks have been developed to manage multiple algorithms operating in parallel on the same dataset. Take advantage of that.
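What “run multiple algorithms and let the data decide” can look like is sketched below, assuming nothing beyond the standard library. The three toy classifiers and the dataset are hypothetical stand-ins; in practice, the candidates would be real algorithm families evaluated with proper cross-validation.

```python
# Three toy classifiers compete on a held-out split; the one with the
# highest validation accuracy wins. (Illustrative models and data only.)

def majority(train):
    """Always predict the most common training label."""
    labels = [y for _, y in train]
    label = max(set(labels), key=labels.count)
    return lambda x: label

def nearest_neighbor(train):
    """Predict the label of the closest training point (1-NN)."""
    return lambda x: min(train, key=lambda p: abs(p[0] - x))[1]

def mean_threshold(train):
    """Predict 1 above a crude cutoff: the mean of the training values."""
    t = sum(x for x, _ in train) / len(train)
    return lambda x: int(x > t)

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

train = [(1, 0), (2, 0), (3, 0), (8, 1), (9, 1), (10, 1)]
valid = [(2.5, 0), (7.5, 1), (9.5, 1), (1.5, 0)]

models = {"majority": majority, "1-nn": nearest_neighbor, "threshold": mean_threshold}
scores = {name: accuracy(fit(train), valid) for name, fit in models.items()}
best = max(scores, key=scores.get)
print(best, scores[best])  # the data, not the practitioner, picks the winner
```

Because the selection is driven by held-out performance, adding a new candidate algorithm is just one more entry in the dictionary – which is also why this pattern chips away at the human-bias problem discussed above.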
Finally, if you have employed multiple machine learning algorithms, your human bias issue should solve itself – particularly if you have adopted techniques that automate the process, thereby using data to find the best algorithmic approaches automatically.
The Opportunity Ahead
Machine learning will live up to the hype. Those in the know are highly confident that this is truly transformative – across every job, every workflow, and every business process.
Organizations that take the initiative will be rewarded commensurately. But understanding the promise and the perils is important, as a passing familiarity with the subject of machine learning is not sufficient. Now is the time to dig in, learn, hire, and invest, because tomorrow could be the day your competitor starts up the ramp.
Gurjeet Singh is Ayasdi‘s CEO and co-founder. As the CEO of Ayasdi, he leads a technology movement that emphasizes the importance of extracting insight from data, not just storing and organizing it.
Gurjeet developed key mathematical and machine learning algorithms for Topological Data Analysis (TDA) and their applications during his tenure as graduate student in Stanford’s mathematics department, where he was advised by Ayasdi co-founder Prof. Gunnar Carlsson.
Gurjeet is the author of numerous patents and has published in a variety of top mathematics and computer science journals. Before starting Ayasdi, he worked at Google and Texas Instruments. Gurjeet was named by Silicon Valley Business Journal as one of their 40 Under 40 in 2015.