Cloud Computing Moves to the Edge in 2016

by   |   December 31, 2015 5:30 am   |   1 Comment

Andy Thurai, left, and Mac Devine

The year 2016 will be exciting in terms of applied technologies. We see many technologies maturing and moving from lab exercises to real-world business solutions that solve real customer problems – especially in the areas of digital transformation, APIs, cloud, analytics, and the Internet of Things (IoT).

In particular, we see the following areas evolving faster than others:


Year of the Edge (Decentralization of Cloud)

Cloud has become the ubiquitous foundation for many enterprises in their quest to provide a single, unified digital platform. Integrating core IT with shadow IT has been the main focus for the last few years, but in 2016 we anticipate the next step in this process. We have started to see companies move from central cloud platforms toward the edge – toward decentralizing the cloud. This is partly because, with the proliferation of IoT devices, operations technology (OT) and decision intelligence need to be closer to the field than to the central platform.


Cloud has become the massive, centralized infrastructure that is the control point for compute power, storage, processing, integration, and decision making for many corporations. But as we move toward IoT proliferation, we need not only to account for billions of devices sitting at the edge, but also to provide the quicker processing and decision-making capabilities that operations technologies require. Areas of low or no Internet connectivity need to be self-sufficient, enabling faster decision making based on localized and/or regionalized data intelligence.

An IDC study estimates that, by 2020, we will have more than 40 zettabytes of data, and that by then about 10 percent of the world’s data will be produced by edge devices. Unprecedented, massive data collection, storage, and intelligence needs will drive major demand for speed at the edge. Services need to be connected to clients, whether human or machine, with very low latency, yet must retain the ability to provide holistic intelligence. In 2016, this expansion of the cloud – moving part of its capabilities to the edge – will happen.

Because of the advent of microservices, containers, and APIs, it is now easy to run smaller, self-contained, purpose-driven services that target only the functions needed at the edge. The portability of containers and the massive adoption of Linux will enable thick, monolithic services that previously ran centrally to be reshaped into collections of smaller, purpose-driven microservices. Each of these can be deployed and run at the edge as needed, on demand. Spark is an excellent example because it is focused on real-time streaming analytics, which is a natural “edge service.”
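The edge pattern described above can be sketched in a few lines. This is a minimal, illustrative example – not any particular product – of an edge service that makes low-latency decisions locally and ships only compact summaries upstream; the class and field names are invented for illustration.

```python
import json
import statistics
from collections import deque

class EdgeAggregator:
    """Sketch of an edge microservice: keep a short window of local
    sensor readings, decide locally, and forward only summaries."""

    def __init__(self, window_size=5, alert_threshold=80.0):
        self.window = deque(maxlen=window_size)
        self.alert_threshold = alert_threshold

    def ingest(self, reading):
        """Accept one local reading; return an immediate local decision."""
        self.window.append(reading)
        # Local, low-latency decision: no round trip to the central cloud.
        return "ALERT" if reading > self.alert_threshold else "OK"

    def summary(self):
        """Compact summary to ship upstream instead of the raw stream."""
        return json.dumps({
            "count": len(self.window),
            "mean": round(statistics.mean(self.window), 2),
            "max": max(self.window),
        })

edge = EdgeAggregator()
decisions = [edge.ingest(t) for t in [72.0, 75.5, 91.2, 78.0]]
print(decisions)       # one ALERT, for the 91.2 reading
print(edge.summary())  # only this summary leaves the edge
```

The same shape – windowed local state, immediate decisions, summarized upstream traffic – is what a containerized streaming-analytics service would provide at larger scale.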

M2M Communications Will Move to the Main Stage

The proliferation of billions of smart devices at the edge will drive direct machine-to-machine (M2M) communications instead of the centralized communication model. The majority of IoT interactions are still about humans (such as the concept of the quantified self), and even when they are not, a human element is still involved in the decision making somewhere.

We predict that the authoritative decision-making source will begin moving slowly toward machines, enabled by these M2M interactions. The emergence of cognitive intelligence platforms (such as IBM Watson) and machine-learning services (such as BigML) will drive this adoption. Currently, trust and security are the major factors preventing this from happening on a large scale. By enabling a confidence-score-based authoritative source, we can eliminate human involvement and ambiguity in decision making. This will enable autonomous M2M communication, interaction, decision making, intelligence, and data sharing, which in turn will lead to the replication of intelligence for quicker localized decisions. In addition, when there is a dispute, the central authoritative source, with cognitive powers, can step in to resolve the issue and smooth the process – without the need for human intervention.

This centralized cognitive intelligence also can manage the devices, secure them, and maintain their trust. It can help eliminate rogue devices from the mix, give a lower rating to untrusted devices, eliminate the data sent by breached devices, and give a lower score to the devices in the wild versus a higher score to the devices maintained by trusted parties.
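A confidence-score-based authoritative source might look something like the following sketch: devices are rated by provenance and health, breached devices are excluded, and only data from devices above a threshold is acted upon. All names, scores, and thresholds here are hypothetical, chosen only to illustrate the idea.

```python
# Baseline trust by provenance: centrally managed devices score higher
# than devices "in the wild", as described in the text above.
TRUST_BASELINE = {"managed": 0.9, "wild": 0.4}

def trust_score(device):
    """Score a device from its provenance and health flags."""
    if device.get("breached"):
        return 0.0                      # data from breached devices is dropped
    score = TRUST_BASELINE[device["provenance"]]
    if device.get("anomalies", 0) > 3:  # rogue-looking behavior lowers the rating
        score *= 0.5
    return score

def accept_reading(device, threshold=0.5):
    """Only act on data from devices above the confidence threshold."""
    return trust_score(device) >= threshold

fleet = [
    {"id": "d1", "provenance": "managed"},
    {"id": "d2", "provenance": "wild"},
    {"id": "d3", "provenance": "managed", "breached": True},
]
trusted = [d["id"] for d in fleet if accept_reading(d)]
print(trusted)  # only the healthy managed device passes
```

A production system would learn these scores from behavior over time rather than hard-coding them, but the decision rule – score, threshold, exclude – is the same.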

Smart Contracts to Enable Smarter Commerce

Another trend that is gaining a lot of traction is smart, automated commerce. Even though edge devices now number in the billions, the monetization of those devices is still sporadic; there is no consistent way to commercialize edge IoT devices. This is where the blockchain concept can help. Edge IoT devices can create smart contracts and publish their details – such as pricing, terms, length, delivery mechanisms, and payment terms – to the blockchain network. A data consumer can browse the list of published smart contracts, choose a good match, and auto-negotiate the contract. Once the terms are agreed upon and the electronic agreement is signed, the data supplier can start delivering the goods to the consumer and get paid automatically.

The lack of a need for human intervention will make commerce faster and smarter. This automation also gives the data consumer the option to continually evaluate the value of the data being received; the ability to re-negotiate or cancel the contract at any time, without long-term lock-in, makes smart contracts more attractive. On the flip side, the data provider also can choose to cancel or re-negotiate the contract, based on contract violations, market demand, deemed usage, and so on.
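The publish–browse–sign flow described above can be sketched without any real blockchain machinery; here a plain list stands in for the shared ledger, and supplier names, prices, and delivery mechanisms are all invented for illustration.

```python
ledger = []  # stand-in for the shared blockchain network

def publish(supplier, price_per_mb, term_days, delivery):
    """Supplier publishes its contract terms for consumers to browse."""
    contract = {"supplier": supplier, "price_per_mb": price_per_mb,
                "term_days": term_days, "delivery": delivery,
                "signed": False}
    ledger.append(contract)
    return contract

def choose(max_price, delivery):
    """Consumer browses published contracts and auto-selects the cheapest match."""
    matches = [c for c in ledger
               if not c["signed"]
               and c["price_per_mb"] <= max_price
               and c["delivery"] == delivery]
    return min(matches, key=lambda c: c["price_per_mb"]) if matches else None

def sign(contract):
    """Electronic agreement: delivery and payment can start automatically."""
    contract["signed"] = True
    return contract

publish("sensor-net-a", 0.05, 30, "mqtt")
publish("sensor-net-b", 0.03, 30, "mqtt")
deal = sign(choose(max_price=0.04, delivery="mqtt"))
print(deal["supplier"], deal["signed"])  # sensor-net-b True
```

A real smart-contract platform would add cryptographic signing, escrowed payment, and dispute handling, but the negotiation loop – publish terms, match automatically, sign, deliver – is the one the article describes.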

Another important aspect of edge IT and edge processing, which includes IoT and fog computing, is monetization and commercialization. Currently, most IoT companies popularize their gadgets and solution sets based on how innovative they are. The commercialization of the gadgets themselves is very limited, however, and will not deliver on the full promise of IoT. Once companies figure out the value of their data, offering Data as a Service, or even Data Insights as a Service, will become more popular. Once this happens, we predict that companies will rush to build the infrastructure for an open economy in which data and data-based insights can be easily produced and sold.

FACTS-based Smarter Systems Finally Come to Fruition

IoT helps bridge the gap between IT and operations technologies (OT). Currently, most core IT decisions about OT are based either on old data (data that is more than seconds old) or on estimation, while decisions made in the field are based on isolated data sets that are partial and delayed. Both lead to subjective decisions.

Going forward, with the growing decentralization of cloud and M2M communications, as well as real-time interaction between the OT data set and core IT, decisions will be made on near-complete, ecosystem-wide, real-time data. This will lead to objective decisions. These fast, accurate, complete, trusted, scalable (FACTS) real-time systems will make core IT business decisions in real time and enforce them at the OT level. As discussed above, Apache Spark allows the necessary services – such as analytics, data intelligence, security, and privacy – to be containerized and moved closer to the edge instead of being processed centrally. This allows the edges not only to make decisions based on events happening elsewhere in the enterprise, but also to make those decisions faster, more completely, and more accurately, all the time.
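One way to picture a FACTS-style edge decision is a node that combines its own fresh reading with the latest intelligence replicated from the core, rather than waiting on a round trip. This is a hedged sketch only; the field names, thresholds, and actions are invented, and freshness here is a simple timestamp check.

```python
import time

# State pushed from core IT to the edge (illustrative fields only).
replicated_state = {"grid_load": "high", "updated": time.time()}

def edge_decision(local_temp_c, state, max_age_s=5.0):
    """Objective decision from local data plus replicated enterprise data."""
    stale = (time.time() - state["updated"]) > max_age_s
    if local_temp_c > 90:
        return "shutdown"                 # safety decisions stay purely local
    if not stale and state["grid_load"] == "high":
        return "throttle"                 # informed by events elsewhere
    return "run"                          # default when central data is stale

print(edge_decision(70, replicated_state))  # throttle
```

The key property is graceful degradation: when the replicated state goes stale, the edge still decides – it just falls back to purely local information, which is exactly the self-sufficiency the article argues edge locations need.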

Andy Thurai is Program Director for API, IoT, and Connected Cloud with IBM, where he is responsible for solutionizing, strategizing, evangelizing, and providing thought leadership for those technologies. Prior to this role, he held technology, architecture leadership, and executive positions with Intel, Nortel, BMC, CSC, and L-1 Identity Solutions.

You can find more of his thoughts at www.thurai.net/blog or follow him on Twitter @AndyThurai.

Mac Devine has 25 years of experience with networking and virtualization. He became an IBM Master Inventor in 2006, an IBM Distinguished Engineer in 2008, and has been a Member of the IBM Academy of Technology since 2009.

Mac currently serves as Vice President of SDN Cloud Services, CTO of IBM Cloud Services Division, and as a faculty member for the Cloud and Internet-of-Things expos. He also is a member of the Data Informed Board of Advisers.


One Comment

  1. Jeff Rutherford
    Posted January 6, 2016 at 4:05 pm | Permalink

    You’re right that 2016 will see IoT gaining more traction as more and more sensors and devices are deployed and connected.

    As enterprises begin adopting the IoT, IT departments will need to decide where that IoT data will reside. Analysts are still split. Some believe IoT data will reside in the cloud, while others predict “fog” computing (which you mentioned) – the IoT data will reside on the edge of the corporate data center infrastructure.
