Big data analytics is being brought to bear on just about any business problem you can think of these days, including internal operations. Departments from sales to customer service to human resources have tapped big data analytics for insights into ways to improve productivity, streamline processes, and cut costs.
This is true even for the traditional keepers of the data: the IT department. IT faces performance and budgetary pressures like any other department within the organization and can benefit from insights that can improve performance while reducing costs.
Data Informed spoke with Gary Oliver, CEO of Blazent, about IT departments’ struggles to glean insights from disconnected and siloed point solutions and data, and how big data analytics can help optimize their operations.
Data Informed: How does big data improve on the way that IT monitoring traditionally is done?
Gary Oliver: There are many very good solutions that are purpose-built to monitor or manage very specific segments of IT, but they don't provide the overall view needed. Some examples are network management, data center operational performance monitoring, anti-virus, service management, and application dependency mapping, to name just a few. These systems typically rely on agentless scans or on software deployed into the environment to measure and control specific components of the infrastructure. Most companies have dozens of these solutions deployed, and each typically has its own reporting on the specific area it covers.
The problem is that each of these systems only knows what it knows, but does not know what it is missing, which leads to dangerous gaps in coverage and security, more outages, and longer times to resolve those outages. Furthermore, to get a complete picture of the overall IT environment for planning purposes and for effective operational and financial management, it is critical to bring all these data sources together with context. This is where big data concepts and technologies are very beneficial.
The variety, velocity, and volume of data from these various systems vary greatly. Some of these tools stream performance or network traffic data in real time, others provide information around dependencies or service relationships, and still others are related to financial measures or human resources. The ability to bring together these very different sources, provide continuous ETL (extract, transform, and load) capability in real time, and then process the data through very complex algorithms to ensure that it is complete and accurate requires high performance, linear scalability, and advanced analytics. That is a perfect fit for big data capabilities.
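To make the ETL idea concrete, here is a minimal sketch of the transform step Oliver describes: records from heterogeneous tools are normalized into one common schema so they can be correlated. The source formats, field names (`ip`, `epoch`, `hostname`, `opened_at`), and the `ITRecord` schema are all hypothetical, chosen only to illustrate the pattern.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Any

# Hypothetical unified record: every source is normalized to this shape
# so downstream analytics can correlate events across tools.
@dataclass
class ITRecord:
    source: str              # which monitoring tool emitted the record
    asset_id: str            # normalized device/host identifier
    timestamp: datetime      # timezone-aware, for time alignment
    attributes: dict[str, Any]

def from_network_monitor(raw: dict) -> ITRecord:
    # Network tools often key on IP and Unix epoch; normalize both.
    return ITRecord(
        source="network",
        asset_id=raw["ip"].lower(),
        timestamp=datetime.fromtimestamp(raw["epoch"], tz=timezone.utc),
        attributes={"latency_ms": raw["latency_ms"]},
    )

def from_service_desk(raw: dict) -> ITRecord:
    # Service-management exports tend to use hostnames and ISO dates.
    return ITRecord(
        source="service_desk",
        asset_id=raw["hostname"].lower(),
        timestamp=datetime.fromisoformat(raw["opened_at"]),
        attributes={"ticket": raw["ticket_id"], "priority": raw["priority"]},
    )

def etl_pass(network_feed, service_feed):
    # One pass of the continuous loop (shown here as a batch for
    # simplicity): extract from each feed, transform to the common
    # schema, and time-align the combined result for correlation.
    unified = [from_network_monitor(r) for r in network_feed]
    unified += [from_service_desk(r) for r in service_feed]
    unified.sort(key=lambda r: r.timestamp)
    return unified
```

In a production system this pass would run continuously against streaming sources rather than static lists, but the normalization step is the essential move: without a shared asset identifier and timestamp, the siloed feeds cannot be joined into one picture.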
Talk about the evolving role of IT and of the position of CIO.
Oliver: As IT moves from a back office function to a strategic business weapon, the role of the CIO and that of IT have changed dramatically. The good news is that what was "the business of IT" is really now just "the business." The role of CIO is as strategic as that of any other executive in the C-suite. The first priority of the CIO is to keep up with business demand, meaning that if technology can be used to provide a new service or product differentiation or to better enable employee productivity, the CIO wants to partner with the other C-level executives and help source the new service, whether through internal or external resources. The point is that IT must now select and manage multiple internal and external providers to maximize speed, cost, and performance, so strategic sourcing has to be a core skill set within IT. This requires visibility and governance across these providers to make sure that the overall IT mission is being met and that, individually, they are meeting both the financial and the SLA objectives defined in their contracts. This is where it is important to combine the many siloed data sources into a complete picture of IT for good governance.
How has the growing complexity of the IT environment, brought on by things like BYOD, impacted organizations’ ability to monitor IT performance?
Oliver: With the consumerization of IT, the CIO and his or her team are pressed to push application functionality to mobile devices and to automate business processes with experiences similar to those of Google, eBay, Amazon, or other popular business-to-consumer applications. This is especially true with the growing number of younger customers and new employees. They grew up using these applications and simply expect this kind of user experience. Combine this with managing multiple providers and with BYOD, which requires that IT ensure these applications run on nearly any device an employee or department chooses to use, and complexity levels increase. The ability to control security and guarantee performance, user experience, and cost effectiveness becomes extremely difficult.
The days of locked-down environments in which all development and delivery happens through one internal source are long gone, but the accountability to deliver cost-effective IT that is agile, secure, and well governed still falls on the shoulders of the CIO. The CIO needs to be constantly looking for the optimal way to deliver all of these services while managing costs, controlling risks, and providing good governance, in addition to being able to demonstrate performance to business leaders and customers. All of this requires an advanced level of visibility and forward-looking decision making that harnesses the complexity of IT systems and data and turns it into business advantage.
What are some of the implications for the business if there isn't visibility into IT operations?
Oliver: Without complete, forward-looking visibility across IT operations, decisions get made with incomplete or bad data. This almost certainly leads to a higher number of outages, because changes get made to the environment without a complete understanding of the underlying state of the target environments prior to the change. One of our large financial services customers tells us that with certain applications, a minute of downtime can translate into $60 million in lost revenue. When an outage does occur, restoring that service as quickly as possible is critical, and that requires a complete view of the state of the environment prior to the outage: any changes made to the environment, all of the underlying components, the users impacted, the related infrastructure involved, etc. Having partial or inaccurate data to resolve these situations leads to longer outages and more lost revenue.
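The "state prior to the outage" point can be illustrated with a small sketch: if the environment's configuration is snapshotted before and after a change, responders can diff the two views to see exactly what moved. The snapshot keys and values below (`web01:java`, `lb:pool`, and so on) are hypothetical, and a real system would snapshot far richer state, but the diff logic is the core idea.

```python
# Sketch: diff two point-in-time configuration snapshots, each a flat
# mapping of "asset:setting" keys to values (a simplifying assumption).
def diff_state(before: dict, after: dict) -> dict:
    changed = {k: (before[k], after[k])
               for k in before.keys() & after.keys()
               if before[k] != after[k]}
    added = {k: after[k] for k in after.keys() - before.keys()}
    removed = {k: before[k] for k in before.keys() - after.keys()}
    return {"changed": changed, "added": added, "removed": removed}

# Hypothetical snapshots taken before and after a change window.
before = {"web01:java": "8u202", "web01:heap_gb": 8, "lb:pool": "A"}
after = {"web01:java": "11.0.2", "web01:heap_gb": 8, "lb:pool": "B",
         "web02:java": "11.0.2"}

delta = diff_state(before, after)
# delta["changed"] shows the Java upgrade on web01 and the load-balancer
# pool switch; delta["added"] shows the new web02 entry.
```

During an incident, a delta like this narrows the search from the whole estate to the handful of things that actually changed before the outage.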
Security and risk make up another area of impact. Effective risk management starts with knowing everything about the environment and making sure that the IT estate is well protected, backed up, and secure. This becomes nearly impossible without context and visibility across all of these point solutions and their data to provide the insights necessary to stay on top of the situation.
How can big data analytics improve that visibility and optimize IT operations?
Oliver: Effective measurement and planning can be done only when dealing with a comprehensive view across the estate, understanding historical trends, and being able to project the future to evaluate delivery and computing options.
One of our customers was looking to consolidate 115 data centers while simultaneously evaluating all of its legacy applications and the costs to upgrade unsupported technologies. The company was looking at the financial and speed benefits of engaging multiple providers to help deliver some components of the applications and services. It would have been impossible to cobble together the 50-plus different point solutions that were giving the company very narrow views of its infrastructure without the overall context and analytics to drive effective decisions, and then track the progress of those decisions as they deployed.
Regardless of the millions of dollars spent on point solutions, an organization is only as secure as the weakest link. It is not uncommon for us to find hundreds, if not thousands, of servers or other devices on the network that were previously not known to IT. And because they were not known, they typically are not being backed up or running the standard set of tools required for security. In many cases, these systems are running software without the proper licenses, and this leads to audit exposure. Another one of our clients found 30,000 devices on its worldwide network that the network team did not previously know about.
The point here is to know what you don’t know, and this requires bringing together all the data across all of the systems, in context.
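Knowing what you don't know is, at bottom, a reconciliation problem, and a minimal version of it can be sketched with set operations: compare what each point solution believes the estate contains and report the differences. The device names and tool inventories below are invented for illustration; in practice each set would come from an export or API of the corresponding tool.

```python
# Hypothetical inventories, one per point solution's view of the estate.
network_scan = {"web01", "web02", "db01", "legacy-app7", "printer-3f"}
cmdb = {"web01", "web02", "db01"}        # IT's official asset inventory
backup_agent = {"web01", "db01"}         # what the backup tool covers
antivirus = {"web01", "web02", "db01"}   # what the security tool covers

# Devices seen on the network that IT's inventory doesn't know about:
# the dangerous blind spot Oliver describes.
unknown_to_it = network_scan - cmdb

# Known devices that are missing a required control.
not_backed_up = cmdb - backup_agent
no_antivirus = cmdb - antivirus

print(sorted(unknown_to_it))   # → ['legacy-app7', 'printer-3f']
print(sorted(not_backed_up))   # → ['web01'] is covered; only 'web02' is not
```

The real challenge, of course, is the context step this sketch skips: the same device may appear under an IP address in one tool and a hostname in another, so identities have to be normalized before the set comparison is meaningful.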