We live in an era of high-speed wireless, flash storage and cloud computing on tap. But it doesn’t always feel that way, does it? The spinning ball on your desktop, the query that takes hours to run—IT delays are still a fact of life.
That’s one of the reasons for the surge of interest in flash storage. IDC recently reported that the all-flash storage market generated $955.4 million in revenue during the fourth quarter of 2015, up 71.9 percent year over year. But anyone expecting flash alone to solve their performance problems is in for a disappointment. New research shows that slow storage is just one reason for delays between data requests and data delivery, a phenomenon known as the app-data gap. The full explanation lies deeper in the data center.
The app-data gap is real, to be sure. A recent study found that application delays cost U.S. companies as much as $7.5 billion a year. Of the 3,000 IT professionals surveyed, nearly half said they lose more than 10 percent of their workdays waiting for software to load. More than 40 percent of business users say they avoid using certain applications at work because they run too slowly.
The problems created by the app-data gap are real as well. The gap disrupts data delivery, degrades worker productivity, creates customer dissatisfaction and damages a company’s overall speed of business and reputation. But the source of the gap is hard to pin down. Slow storage usually takes the blame. But another recent study, which examined operations at thousands of different data centers, found that only 46 percent of all application delays could be attributed to slow storage.
The greater culprit is growing data center complexity, with its multiple layers of networks, servers, storage, hypervisors, operating systems and applications. The study shows that a full 37 percent of application delays stem from problems with configuration and interoperability. Another 15 percent stems from issues with compute, virtualization and deviations from best practices.
Flash storage can take you a long way. But it leaves more than half the problem untouched—which is the harder bit to fix. Even IT experts can’t predict where problems deep in the data center are going to crop up. Luckily, machine learning can.
To close the app-data gap, IT organizations need to leverage predictive analytics that harness data gathered from thousands of sensors across every piece of the data center. This enables them to:
– Identify poor performance before users are affected. Machine learning can identify high-performing environments across an organization’s data center, establishing a baseline against which poor performance stands out and automatically providing actionable insight.
– Minimize or eliminate the effects of an issue. With big data, organizations can correlate vast amounts of information across the infrastructure to detect and rapidly identify the root cause of performance issues, resolving the problem before its effects are felt across the organization.
– Prevent businesses from encountering problems that have already hit others. With machine learning, once one customer’s problem and root cause have been determined, its “signature” can be used to identify which other customers might be affected. Additionally, a “rule” can be created to prevent the same issue from recurring.
– Continually improve performance and availability for users. Machine learning technology can flag potential issues and abnormal behavior, recommending steps to return an environment to peak health, and continually improving the performance and availability of an environment.
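The baselining idea in the first bullet can be sketched in a few lines of Python. This is an illustrative toy, not a description of any vendor’s actual product: the sample data, the latency metric and the three-sigma threshold are all assumptions chosen to show the mechanism of learning “normal” from healthy telemetry and flagging deviations before users notice them.

```python
import statistics

def build_baseline(samples):
    """Learn a baseline (mean, stdev) from latency samples taken while healthy."""
    return statistics.mean(samples), statistics.stdev(samples)

def flag_anomalies(samples, baseline, n_sigmas=3.0):
    """Return indices of samples deviating more than n_sigmas above the baseline."""
    mean, stdev = baseline
    threshold = mean + n_sigmas * stdev
    return [i for i, s in enumerate(samples) if s > threshold]

# Healthy telemetry: read latencies hovering around 2 ms.
healthy = [2.0, 2.1, 1.9, 2.2, 2.0, 1.8, 2.1, 2.0]
baseline = build_baseline(healthy)

# New telemetry containing a spike well outside the learned baseline.
incoming = [2.0, 2.1, 9.5, 2.0]
print(flag_anomalies(incoming, baseline))  # -> [2]
```

A production system would track thousands of such metrics per array and correlate them across the stack, but the principle is the same: the baseline is learned from data rather than set by hand, so it adapts to each environment.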
Businesses are already reaping the benefits of these machine-learning systems. By implementing a predictive analytics solution, Rent-A-Center has been able to quickly and reliably access and process data for forecasting and store operations across its 4,000 locations. Rent-A-Center performs analytics and auditing of the company’s retail locations on a daily basis, and tracks how each promotion is performing to decide which should continue. Similarly, the City of Pueblo, Colorado uses predictive analytics to identify and prevent performance issues across its business-critical applications, including its latency-sensitive 911 call dispatch application that is critical to ensuring public safety.
Switching to fast flash storage is an excellent step toward boosting business performance and reducing application downtime. But it’s only the first step. To fully close the app-data gap, organizations need to revisit how they monitor their infrastructure. They can’t just evaluate speeds, feeds and price. Nor can they rely on traditional metrics for infrastructure reliability and availability, which primarily measure redundancy of each component but do little to ensure that all components interoperate correctly. They need to integrate predictive analytics from the very start. If they do all that, everything else follows: faster performance, better productivity and happier customers.
Rod Bagg is vice president of analytics and customer support at Nimble Storage, where he leads worldwide support and drives support automation and advanced data science initiatives. Rod joined Nimble in 2009 and conceived and developed InfoSight, which is now recognized as a clear differentiator in the industry for advanced, cloud-based operational analytics and storage lifecycle management. Prior to Nimble, Rod served as VP of engineering at Glassbeam, where he co-founded the Glassbeam data analytics Software-as-a-Service. Rod has held senior management positions at Infoblox and at NetApp, where he was responsible for product support, support automation and RAS initiatives.
Subscribe to Data Informed for the latest information and news on big data and analytics for the enterprise.