The 7 Drivers of Public Cloud Complexity

September 6, 2016

Joe Kinsella, CTO and founder, CloudHealth Technologies


In the last few years, the public cloud has shifted from a technology adopted primarily by fast-growing technology companies to a mainstream infrastructure platform used by an increasing number of enterprises. This shift has been driven by many of the powerful benefits of the public cloud, including increased agility, access to global infrastructure, consumption-based pricing, elasticity, and reduced operating costs.

But this shift has also come with an unexpected consequence: a growing complexity that organizations must confront to prevent cost overruns, compliance gaps, reduced agility, or even the failure of an entire cloud strategy.

As many early-adopting enterprise CIOs are learning, public cloud complexity can be the single greatest obstacle in your path to success. Here are the seven drivers of cloud complexity.

Complexity of Knowledge

The public cloud requires an organization to adopt new technologies, services, processes, and methodologies. While many of the basic concepts of Infrastructure as a Service (IaaS) are based on technologies we have worked with for a decade or more, their use in the public cloud requires new means of access (e.g., APIs, web consoles), new methodologies (e.g., DevOps, infrastructure as code), and new approaches to management (e.g., shared environments, API-driven operations, vendor management). To further exacerbate the complexity, we have a growing number of platform services, each of which requires deeply specialized knowledge to deploy and manage (e.g., Google Bigtable, Azure Machine Learning, AWS Kinesis).
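
To make the shift in access concrete: even a routine task such as inventorying running servers is now an API call rather than a spreadsheet or a walk through a data center. Below is a minimal sketch using AWS's boto3 SDK; the region and state filter are illustrative assumptions, not recommendations.

```python
# A minimal sketch of API-driven infrastructure access using AWS's boto3 SDK.
# The region and filter values below are illustrative assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Page through all running instances; the API is the only interface here --
# no console session, just structured requests and responses.
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance["InstanceType"])
```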

Complexity of Dynamic Infrastructure

Public cloud infrastructure is shorter-lived than its physical or virtual predecessors, a change driven by the social engineering of on-demand infrastructure and consumption-based pricing. This shorter-lived infrastructure has changed how we approach managing it: from driving it entirely through declarative, fully automated tools (e.g., Chef, Ansible), to designing it to take advantage of on-demand capacity (e.g., autoscaling), to adopting new operational models (e.g., DevOps, NoOps).
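
To illustrate just how disposable this infrastructure can be, the sketch below creates a server on demand and destroys it moments later, entirely through API calls (again using boto3; the AMI ID and instance type are placeholders, not recommendations).

```python
# A sketch of short-lived, on-demand infrastructure: an instance created
# via API, used briefly, and thrown away. The AMI ID and instance type
# are placeholder values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a single instance on demand...
result = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = result["Instances"][0]["InstanceId"]

# ...perform some short-lived work here, then discard the instance.
ec2.terminate_instances(InstanceIds=[instance_id])
```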

Complexity of New Tools

For almost a decade, the management of enterprise infrastructure was dominated by the big five IT Operations Management (ITOM) providers: IBM, HP, Microsoft, BMC, and CA. Starting in the mid-to-late 2000s, we saw a burst of innovation that brought new products across all segments of ITOM, including configuration management (e.g., Puppet, Chef), application performance management (e.g., New Relic, AppDynamics), log management (e.g., Splunk, Sumo Logic), incident management (e.g., ServiceNow, PagerDuty), and security management (e.g., Alert Logic, Evident.io). These modern, cloud-centric products and services have revolutionized how we manage our infrastructure. But they have also introduced multiple new consoles, APIs, and vendors for our teams to learn and manage.

Complexity of Rapid Innovation

One of the biggest attractions of the public cloud has been the incredible pace of innovation coming from providers. Private clouds from enterprise IT have been substantially outpaced by the rapid innovation of Amazon, Microsoft, and Google. Amazon alone had over 450 feature releases last year and has increased that pace this year. This innovation includes both new services (e.g., Google Cloud Functions, AWS Snowball, Azure Machine Learning) and changes to existing services (e.g., new instance/machine types). Each new service or change to an existing one can deliver a benefit to your organization, but it also requires that your team take the time to understand and leverage its potential.

Complexity of Management

An average organization will use 10 to 12 different point tools in the day-to-day support of its public cloud infrastructure, each providing vertical value for some segment of the management stack. A typical company might use AWS as its cloud provider but still rely on a variety of tools for day-to-day management, such as Chef, Datadog, New Relic, Sumo Logic, PagerDuty, Docker, Alert Logic, and ServiceNow. While each tool can be purpose-built to solve a specific need, we often require data from multiple tools to drive decision making. As a result, in return for the powerful innovation coming from the products and services supporting our clouds, we are forced to look through multiple “keyholes” to manage our growing public cloud infrastructure.
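
The sketch below illustrates the “keyhole” problem: answering even one operational question means stitching together responses from multiple tools, each with its own API. The endpoints, parameters, and response shapes here are hypothetical stand-ins, not real vendor APIs.

```python
# A sketch of the "keyhole" problem: combining data from several point tools
# to answer one question. All endpoints and response shapes are hypothetical.
import requests

MONITORING_API = "https://monitoring.example.com/api/v1/alerts"  # hypothetical
INCIDENT_API = "https://incidents.example.com/api/v1/open"       # hypothetical

def open_issues_for(service):
    """Combine alert and incident counts for one service from two tools."""
    alerts = requests.get(MONITORING_API, params={"service": service}).json()
    incidents = requests.get(INCIDENT_API, params={"service": service}).json()
    return {
        "service": service,
        "active_alerts": len(alerts.get("alerts", [])),
        "open_incidents": len(incidents.get("incidents", [])),
    }

print(open_issues_for("checkout"))
```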

Complexity of Distributed Ownership

Managing enterprise infrastructure before the public cloud required the joint participation of multiple stakeholders across IT, lines of business, and finance. Decision making occurred at a predictable but much slower pace (e.g., quarterly) that allowed time for cross-functional collaboration. The public cloud, however, disrupted this model, pushing ownership of decisions to the technical teams consuming the infrastructure and requiring a much higher velocity of decision making. While distributed, decentralized ownership has brought incredible agility, organizations are still struggling with how to centrally govern the cost, availability, performance, security, and usage of their infrastructure.
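
In practice, central governance over decentralized usage often begins with something as simple as enforcing resource tagging, so that cost and ownership can be traced back to the teams making decisions. Here is a minimal sketch of a tag-compliance audit using boto3; the required tag keys are an assumed organizational policy, not a standard.

```python
# A sketch of lightweight central governance: auditing EC2 instances for
# required ownership tags. The tag keys below are assumed policy choices.
import boto3

REQUIRED_TAGS = {"owner", "cost-center"}  # assumed org policy

ec2 = boto3.client("ec2", region_name="us-east-1")

noncompliant = []
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"] for t in instance.get("Tags", [])}
            if not REQUIRED_TAGS <= tags:  # missing one or more required tags
                noncompliant.append(instance["InstanceId"])

print("Instances missing required tags:", noncompliant)
```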

Complexity of Continuous Optimization

The pace of change in the cloud has raised the required velocity of infrastructure management. While a pre-cloud organization might manage cost on a quarterly basis, security on a weekly basis, and performance at a monthly cadence, organizations running in the cloud require constant optimization that, in many cases, approaches real time. Continuous optimization is one of the costs of the agility we derive from the public cloud, but if well managed it can deliver efficiency and smart growth to an enterprise.
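
As one example of what “continuous” looks like in practice, the sketch below flags instances whose recent CPU utilization suggests they may be idle, and therefore candidates for rightsizing or termination. It uses boto3 and CloudWatch metrics; the 5 percent threshold and one-day lookback are illustrative assumptions.

```python
# A sketch of one continuous-optimization loop: flagging possibly idle
# instances by average CPU over the last day. The threshold and window
# are illustrative assumptions, not recommendations.
import boto3
from datetime import datetime, timedelta

CPU_THRESHOLD = 5.0  # percent; assumed cutoff, tune per workload

ec2 = boto3.client("ec2", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

now = datetime.utcnow()
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            instance_id = instance["InstanceId"]
            stats = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
                StartTime=now - timedelta(days=1),
                EndTime=now,
                Period=3600,  # hourly datapoints
                Statistics=["Average"],
            )
            points = stats["Datapoints"]
            if points:
                avg = sum(p["Average"] for p in points) / len(points)
                if avg < CPU_THRESHOLD:
                    print(instance_id, "averaged %.1f%% CPU; may be idle" % avg)
```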

As the public cloud goes mainstream in the enterprise, we are confronted with the growing complexity that has resulted from its adoption. Enterprises are struggling to create and adopt new processes, methodologies, products, and technologies to support the growing usage across their organizations. Successfully managing the inherent complexity of the public cloud can yield a level of agility and flexibility that provides a strategic advantage to a business. But failure to manage these same complexities can result in problems up to and including the failure of an entire cloud strategy.

In Part 2 of this article, we will discuss what some enterprises are doing to manage the growing complexity of the public cloud successfully.

Joe Kinsella is founder and CTO of CloudHealth Technologies, one of the fastest-growing companies in the emerging Cloud Service Management field. Joe is focused on helping organizations realize the full potential of the cloud, without having to sacrifice cost, performance, availability, or service level. With 20+ years of experience delivering software for companies of all sizes, Joe sees CloudHealth bringing the cloud to the enterprise by enabling the next generation of IT service management for the cloud.
