Analytics Living on the Edge of Cloud Computing

by John Schmidt   |   June 2, 2016 5:30 am

John Schmidt, Data Center Solutions Lead, CommScope

In their 2009 publication The Data Center as a Computer, Luiz André Barroso and Urs Hölzle articulate the basics of warehouse-scale computing, or cloud computing. At the time of publication, the view of the data center as a singular compute entity was limited to a single facility. The authors purposely limited their scope “because the huge gap in connectivity quality between intra- and inter-datacenter communications causes programmers to view such systems as separate computational resources. As the software development environment for this class of applications evolves, or if the connectivity gap narrows significantly in the future, we may need to adjust our choice of machine boundaries.”

I believe that we are nearing the time when this connectivity gap will narrow to a point that the entirety of the network will function as a computer. This is driven by a dramatic increase in bandwidth as we move from 100G to 400G, coupled with consumer and client demands to support localized real-time analytics. Concurrently, we see changes in the network architecture to reduce latency and to bring computing closer to the user via edge network data centers. Let’s consider some of the network changes that are being driven by analytics and their impact.

Cloud Evolution

At the outset, data centers provided a means of long-term archival and disaster-recovery storage. Over time, the massive amounts of data housed in these data centers became an asset of unprecedented value, ushering in the era of big data analytics. However, the solution to one problem created another. Specifically, the amount of data to be analyzed is no longer localized in a data center, but is distributed across a network of data centers that make up the cloud.

A prime driver of this distribution is wearable technology. In 2015, there were 97 million wearables producing a collective 15 petabytes of data per month. This data is intimately related, very valuable, continually updated, and globally dispersed. Due to the constantly changing nature of this data, global integration is essential.

Another influence is video and content delivery networks (CDNs). CDNs had been a niche domain, but major players like Google and Amazon are now aggressively expanding their networks via edge points of presence globally. Consumer and corporate tolerance for latency is decreasing as expectations for network performance are increasing. Emerging technologies such as augmented and virtual reality will continue to stress the network and consume additional bandwidth. Interactivity will place additional strain on the network as consumers move from pure consumption of media, like video streaming, to two-way interaction with their content, such as gaming. Such advancements will continue to extend the network edge closer and closer to the user. As this happens, we will see a massive buildout of data centers that are optimized for edge deployment.

Edge Data Center Design

The edge of the cloud will need to be efficient, low cost, and self-managing. For these reasons, we are seeing a major push toward colocation. By 2020, we expect more than 50 percent of enterprise data centers to be located in colocation facilities, up from 30 percent in 2015. Purpose-built data centers will become the norm, as opposed to simply placing servers in existing corporate facilities that are not optimized for location or energy efficiency. We also expect to see a dramatic increase in modular data centers, particularly at the network edge. In fact, the market for modular data centers at the network edge alone is expected to exceed $1 billion in 2020. These systems can be placed in any location with access to power and high-speed fiber connectivity.

Conservation of power is a critical factor in the deployment of modular systems. In certain environments, the majority of cooling can be done using adiabatic or “free cooling” systems, which allow the modular data center to achieve a power usage effectiveness (PUE) below 1.05, versus a data center industry average of 1.7.
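For reference, PUE is simply the ratio of total facility power to the power delivered to IT equipment. The short Python sketch below illustrates the arithmetic behind the sub-1.05 and 1.7 figures above; the load values are hypothetical, chosen only to make the ratio concrete.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT load."""
    return total_facility_kw / it_load_kw

# Hypothetical figures for a 500 kW IT load:
# a free-cooled edge module adding ~20 kW of overhead, versus a conventional
# facility adding ~350 kW of cooling and power-distribution overhead.
print(round(pue(520, 500), 2))   # 1.04 -- sub-1.05, free cooling
print(round(pue(850, 500), 2))   # 1.7  -- the industry average cited above
```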

Self-management of these edge data centers is also essential from both a cost and practical standpoint. It is not financially feasible to support edge data centers with the same level of on-site support that exists in the core of the network. Self-management to the greatest degree possible is achieved through integrated use of automated infrastructure management (AIM) and data center infrastructure management (DCIM). Used concurrently, these technologies will allow for remote management of all aspects of the data center, including the physical layer, power, cooling, and provisioning.
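As a rough illustration of what that integrated management might look like, the sketch below polls AIM and DCIM endpoints for link status and power/cooling telemetry and raises alerts for an unstaffed edge site. The endpoint URLs, response fields, and thresholds are assumptions for illustration, not any particular vendor’s API.

```python
import json
import urllib.request

# Hypothetical endpoints exposed by the site's AIM and DCIM systems.
AIM_PORTS_URL = "https://edge-site-01.example.com/aim/ports"
DCIM_TELEMETRY_URL = "https://edge-site-01.example.com/dcim/telemetry"

def fetch_json(url):
    """Fetch a JSON document from a management endpoint."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)

def site_alerts():
    """Return alerts that would trigger remote remediation or a truck roll."""
    alerts = []
    ports = fetch_json(AIM_PORTS_URL)            # e.g. {"panel1-port12": "down", ...}
    telemetry = fetch_json(DCIM_TELEMETRY_URL)   # e.g. {"pue": 1.06, "inlet_temp_c": 24.0}

    alerts += [f"fiber link {p} is down" for p, state in ports.items() if state == "down"]
    if telemetry.get("inlet_temp_c", 0.0) > 27.0:
        alerts.append("inlet temperature above the recommended operating range")
    if telemetry.get("pue", 0.0) > 1.2:
        alerts.append("PUE drifting above the free-cooling target")
    return alerts
```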

High-speed Interconnections

Another trend we see developing is high-speed data-center-to-data-center connectivity. This is currently being achieved through high-fiber-count fiber-optic cables carrying links at 100G and above. These interconnects enable not only the connection of individual data centers, but also cloud-to-cloud interconnection via peering at Internet exchanges. Similarly, we continue to see expansion of trans-ocean fiber links connecting major Internet hubs globally. Direct linking of core data centers to other core and edge data centers will be a critical advancement for improving performance. The more direct links there are, the fewer hops sit between the user and cached data, improving latency and the speed of bi-directional access. In the near future, advancements in optics technology such as silicon photonics may further lower the cost of deploying high-speed interconnects, increasing their adoption across the network.
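A rough latency estimate shows why both hop count and distance matter: light in optical fiber propagates at roughly 5 microseconds per kilometer one way, and each routing hop adds forwarding and queuing delay. The sketch below uses an assumed per-hop overhead purely for illustration.

```python
# One-way latency estimate between a user and cached content.
PROPAGATION_US_PER_KM = 5.0   # approximate speed of light in optical fiber
PER_HOP_OVERHEAD_US = 50.0    # assumed forwarding/queuing delay per hop (illustrative)

def one_way_latency_ms(distance_km: float, hops: int) -> float:
    """Estimate one-way latency in milliseconds for a path of given length and hop count."""
    return (distance_km * PROPAGATION_US_PER_KM + hops * PER_HOP_OVERHEAD_US) / 1000.0

# A core data center 2,000 km away reached through 12 hops, versus an
# edge cache 100 km away reached through 3 hops:
print(one_way_latency_ms(2000, 12))  # ~10.6 ms
print(one_way_latency_ms(100, 3))    # ~0.65 ms
```

Even with generous assumptions, serving content from a nearby edge cache over a directly linked path cuts one-way delay by an order of magnitude.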

Big Data/Big Networks

In the coming years, we will continue to see massive growth in data and content consumption from all parts of the network. This will drive changes, primarily expansion of the network to the edge, closer to the users and suppliers of data, whether they are human or machine. Preparing for this reality will be essential for service providers, OEMs, and cloud architects. The result will be reduced latency and superior network performance. These gains will quickly be consumed by an ever more demanding consumer base as well as richer, more interactive content and data. This has been the progression of cloud networking in the past, and it will likely bring us capabilities in the future that can only be imagined today.

John Schmidt manages CommScope’s Global Data Center Solutions group, leading a team of solutions architects and segment leads. He has a diverse background in a wide range of technologies from the perspectives of design engineering, product management, and sales management. He has a strong track record of efficiently managing P&L and exceeding revenue and profitability targets.
