Today’s IT infrastructures are more complex than ever, as companies combine a wide variety of resources spanning physical, virtual, and cloud platforms to support their IT operations. These new data centers reduce costs and increase mobility. IT departments can quickly align operations, deploy applications, and configure systems to best meet constantly changing business needs. But with this convenience and flexibility, new challenges arise.
In the early days of virtual computing, companies primarily migrated applications that were not business critical to virtual and cloud environments. Business-critical applications, such as SQL Server, Oracle, and SAP, remained on dedicated physical servers so that IT maintained tight control and ensured that required levels of performance, efficiency, and reliability were achieved.
Now, as IT seeks to lower costs and build flexibility across all of its operations, the trend toward moving business-critical and database applications into virtual environments is growing fast. But despite the cost savings and flexibility of these environments, there are significant challenges. Optimizing and ensuring performance and availability in a virtual environment is far more challenging than in the orderly, disciplined environment of dedicated physical servers. The large-scale, shared, and dynamic nature of virtual environments makes it difficult to understand and address even simple application and performance problems. These environments are made up of applications, storage devices, network devices, and services that operate in a complex relationship to one another. Because these devices and services are interdependent, the operation of one can affect the resources used by one or more of the others. Understanding how these subtle and complex relationships affect performance, and how to correct performance issues, is daunting.
The other challenge in virtual and cloud environments is planning and growing the infrastructure. Because the typical virtualized environment has hundreds to thousands of virtual machines, along with applications, hosts, storage, and network resources that are configured in an interdependent, highly dynamic environment, simple changes to the environment can sometimes produce unanticipated and serious consequences. IT departments must rely on staff with specialized skills, who nevertheless often must make educated guesses to identify problems, improve configurations, and evaluate the impact of changes.
While there are tools to help IT departments solve performance problems, for most small-to-medium sized enterprises these tools are expensive, complex, and labor-intensive. They also focus on a single aspect of the infrastructure, such as application or host performance, database management, analytics, or network monitoring. Others monitor a single resource, such as a storage device or network, against a manually set threshold on a metric like CPU utilization. If an object exceeds the threshold, the tool sends a flurry of alerts that may or may not be important and often mask a bigger problem.
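The weakness of a fixed threshold can be shown in a few lines. The snippet below is a hypothetical illustration, not any vendor's implementation: a one-size-fits-all CPU cutoff fires on every spike, even when that spike is routine behavior for the host.

```python
# Hypothetical illustration: naive threshold-based alerting.
# A fixed cutoff fires on every transient spike, even when the
# spike is normal for this host (e.g., a nightly backup job).

CPU_THRESHOLD = 80.0  # manually chosen, one-size-fits-all percentage


def threshold_alerts(cpu_samples):
    """Return the indices of samples that would trigger an alert."""
    return [i for i, cpu in enumerate(cpu_samples) if cpu > CPU_THRESHOLD]


# A host whose normal nightly job pushes CPU near 90%:
samples = [20, 25, 30, 91, 93, 90, 28, 22]
print(threshold_alerts(samples))  # → [3, 4, 5]: three alerts for routine work
```

Every nightly run generates the same flurry of alerts, so operators learn to ignore them, and a genuinely abnormal reading hides in the noise.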
None of these tools provides a view that encompasses the entire infrastructure and all of its complex interdependencies. To diagnose a problem that spans several areas (application and storage, for example), IT specialists are left to sift manually through the presented data to piece together the complete picture. These efforts usually involve a team of skilled IT experts, each analyzing the data within the confines of a specific focus area (network, storage, application, or server) and then working together to assemble the data, identify the problem, and develop a solution. They are making educated guesses based on the information at hand, without a big-picture view. They may not be receiving the information they need to find the best solution, or they may be receiving so much data about the system that the “noise” overwhelms them and prevents informed, accurate decisions. If there were a way to view the entire infrastructure and determine how one problem affects the whole system, IT departments could solve problems more effectively and efficiently, resulting in tremendous time and cost savings.
There is such a solution: machine-learning–based analytics.
Machine-learning–based analytics marks a vast improvement over one-dimensional, threshold-based tools by focusing on knowledge discovery rather than on report data or metrics. The next-generation machine-learning analytics tools deliver actionable information to IT administrators, saving hours of manual work comparing reports and information from multiple sources to identify issues.
These advanced machine-learning–based analytics tools use adaptive technology to teach themselves about the infrastructure, and the interrelationships of its various components, over time. Rather than simply reporting a point-in-time status or imprecise data averages, these tools provide valuable, powerful knowledge of the infrastructure that helps IT personnel predict, simulate, and recommend solutions.
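As a simplified sketch of the idea (not any specific product's algorithm), an adaptive monitor learns a per-metric baseline from that metric's own history and flags only deviations from the learned behavior. The three-sigma rule below is an illustrative choice:

```python
import statistics

# Sketch of baseline learning: flag a sample only when it deviates
# from the metric's own observed history, not a fixed cutoff.
# The 3-sigma rule is an illustrative choice, not a product spec.


def is_anomalous(history, sample, k=3.0):
    """True if `sample` lies more than k standard deviations
    from the mean of the observed history for this metric."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return sample != mean
    return abs(sample - mean) > k * stdev


# A host that routinely runs near 90% CPU at night:
nightly = [88, 91, 90, 92, 89, 90, 91]
print(is_anomalous(nightly, 93))  # → False: within learned behavior
print(is_anomalous(nightly, 45))  # → True: an unusual drop worth flagging
```

Note the contrast with a static threshold: the 93% reading that would have triggered an alert is recognized as normal for this host, while the 45% reading, which a static CPU threshold would silently pass, is surfaced as the real anomaly.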
Benefits of Machine Learning
Advanced machine-learning–based analytics solutions are highly automated and eliminate the manual configuration and rules definition required by conventional monitoring tools. Because they require no agents or complex configuration, IT can set up some machine-learning technology in as little as 15 minutes and start gaining important insights about infrastructure and application operations without requiring time-consuming manual analysis from multiple domain experts. Machine-learning technology integrates and analyzes a wide range of data types and automatically makes specific recommendations to resolve application performance problems. IT personnel don’t need to spend hours interpreting data or guessing at a solution. They can respond quickly to the real problem, resulting in minimal or no downtime.
Modern machine-learning technologies automatically discover the topology of the infrastructure to uncover complex and hidden relationships between network, storage, infrastructure, and applications without manual assistance. With an understanding of the topology and by applying behavioral analysis to interpret the data, this approach continuously improves its accuracy and effectiveness. It automatically learns and incorporates into its analysis the knowledge it gains about the day-to-day, week-to-week, and month-to-month characteristics of a company’s infrastructure.
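The value of a discovered topology can be illustrated with a toy dependency graph (the graph and component names below are invented for illustration): once the relationships are known, a simple walk of the graph reveals which applications a degraded component affects, something siloed per-device views cannot show.

```python
from collections import deque

# Toy dependency graph (invented for illustration): edges point from
# a component to the components that depend on it.
DEPENDENTS = {
    "storage-array-1": ["db-vm-1", "db-vm-2"],
    "db-vm-1": ["app-server-1"],
    "db-vm-2": ["app-server-2"],
    "app-server-1": [],
    "app-server-2": [],
}


def impacted_by(component):
    """Breadth-first walk: everything downstream of a degraded component."""
    seen, queue = set(), deque([component])
    while queue:
        node = queue.popleft()
        for dep in DEPENDENTS.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return sorted(seen)


print(impacted_by("storage-array-1"))
# → ['app-server-1', 'app-server-2', 'db-vm-1', 'db-vm-2']
```

In a real product the graph is discovered automatically rather than hand-written, but the principle is the same: a single storage fault is immediately traced to every application it touches, instead of surfacing as four seemingly unrelated alerts.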
Unlike conventional monitoring dashboards, which display only data and utilization metrics regarding CPU, memory, storage, and networking, machine-learning systems provide an overview of the operational state of the infrastructure and application. IT personnel can monitor status, identify and resolve problems, explore infrastructure improvements, and tune the efficiency of infrastructure operations quickly and easily. Machine-learning dashboards are touch-operated and support any platform and resolution, allowing easy access from any location. No other tools are required.
Machine learning is the future of the new data center, and with the latest advancements in this technology, the future is here. This robust technology allows IT departments to troubleshoot and optimize the infrastructure quickly and accurately the first time. As a result, IT can focus on larger challenges and solutions that will help their companies operate efficiently and remain competitive in the marketplace.
Jerry Melnick is President and CEO of SIOS Technology Corp. Jerry is responsible for directing the overall corporate strategy for SIOS Technology Corp. and leading the company’s ongoing growth and expansion. He has more than 25 years of experience in the enterprise and high-availability software markets. Before joining SIOS, he was CTO at Marathon Technologies, where he led business and product strategy for the company’s fault-tolerant solutions. His experience also includes executive positions at PPGx, Inc. and Belmont Research, where he was responsible for building a leading-edge software product and consulting business focused on supplying data warehouse and analytical tools. Jerry began his career at Digital Equipment Corporation, where he led an entrepreneurial business unit that delivered highly scalable, mission-critical database platforms to support enterprise-computing environments in the medical, financial, and telecommunication markets. He holds a Bachelor of Science degree from Beloit College, with graduate work in Computer Engineering and Computer Science at Boston University.