The Internet of Things and the Necessity of Fog Computing

By Jelani Harper   |   March 10, 2016

The Internet of Things (IoT) is arguably the ultimate expression of big data. Conservative estimates project as many as 22 billion connected devices by the end of the decade, all constantly generating semi-structured or unstructured data in real time. The methods for processing, analyzing, and deriving action from that data will differ markedly from the conventional methods currently deployed by most centralized data centers.

Traditionally, copious amounts of big data have been managed best in the cloud, as organizations take advantage of the cloud's scalability, abundant storage, flexible pricing, reduced physical infrastructure, and elastic computing capabilities. Nonetheless, the sheer volume of disparate data generated by the IoT is projected to place immense strain on organizational bandwidth and network availability, which can lead to failures and delays for time-sensitive data.

In the wake of such predictions, a new paradigm has emerged to accommodate the massive amounts of data that mobile computing, the IoT, and big data are producing. By utilizing a decentralized cloud model known as fog computing or edge computing, organizations can realize decreased time to action, reduced costs for infrastructure and bandwidth, and greater access to data.

Organizational Benefits

The advantages of fog computing's decentralized approach to IoT analytics extend to both the enterprise and end users. Organizations and data center operators benefit because the majority of computations are performed at the edge of the cloud, closer to the devices themselves. Instead of continuously transmitting the massive amounts of big data produced by an equipment asset on the industrial Internet to a data center, which consumes enormous quantities of bandwidth, fog computing transmits to the center only the results of the computations or analytics that pertain to asset management.

In this scenario, the 90 percent of data that merely confirms the asset is functioning properly is processed at the source and never needs to move across the network. Only the 10 percent revealing that an asset is malfunctioning or in need of preventive maintenance is transmitted. This greatly decreases network strain and time to action: organizations don't need to expand their physical infrastructure and network capacity, and they can maintain sufficient network availability even as they add analytics for the IoT.
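
As a rough sketch of that filter-at-the-source pattern, consider the following Python fragment. The endpoint URL, the vibration threshold, and the sensor read are all hypothetical placeholders rather than any particular vendor's API; the point is simply that normal readings stay local while exceptional ones travel upstream.

```python
import json
import random  # stands in here for a real sensor driver
import urllib.request

CENTRAL_ENDPOINT = "https://datacenter.example.com/asset-events"  # hypothetical
VIBRATION_LIMIT = 7.5  # illustrative threshold for "needs maintenance"

def read_vibration(asset_id: str) -> float:
    """Placeholder for an actual sensor read on the edge device."""
    return random.uniform(0.0, 10.0)

def report_anomaly(asset_id: str, value: float) -> None:
    """Send only the anomalous reading upstream; normal data never leaves."""
    payload = json.dumps({"asset": asset_id, "vibration": value}).encode()
    request = urllib.request.Request(
        CENTRAL_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)

def monitor(asset_id: str) -> None:
    value = read_vibration(asset_id)
    if value > VIBRATION_LIMIT:
        report_anomaly(asset_id, value)  # the small fraction that matters
    # Otherwise the reading is aggregated or discarded locally;
    # nothing is transmitted to the data center.
```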

End-user Benefits 

This approach is a win both for those monitoring data transmitted by the IoT and mobile devices and for those who depend on that data. By performing computations near the edge of the cloud, closer to the source of the data, fog computing lets the devices and end users that need the results of those calculations get them far more quickly than they otherwise could. There is no need to wait for vast amounts of data to travel across the country (or even across the world), to perform analytics at a centralized data center, or to hope that the network remains available throughout. Instead, only the results of the analytics make that trip. In some instances, everything is processed at the edge of the cloud by the device itself, which acts on the results directly, as the sketch below illustrates. The decreased time to action and greater availability of this paradigm can reinforce the trends of mobile computing and the IoT, helping them gain further traction while satisfying end users in a way that centralized cloud approaches can match only at far greater expense.
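
For that fully local case, a control loop that never leaves the device might be sketched as follows; the sensor and actuator functions are hypothetical stand-ins for device-specific drivers.

```python
def control_loop(read_temp, set_valve, target=65.0):
    """Sense, decide, and actuate entirely on the edge device.

    No data leaves the device during the loop; at most, a periodic
    summary or an anomaly report would be sent upstream afterward.
    """
    temp = read_temp()       # placeholder sensor read
    if temp > target:
        set_valve("open")    # act immediately, with no network round trip
    else:
        set_valve("closed")
```

Whatever the actuation, the latency saved is the full round trip to a centralized data center and back.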

Concerns

Fog computing, however, is far from a panacea. One of the immediate costs of this method is equipping end devices with the hardware needed to perform calculations remotely, independent of centralized data centers. Several vendors, however, are refining technologies for exactly that purpose. The tradeoff is an upfront one: by investing in such solutions now, organizations avoid repeatedly upgrading their infrastructure and networks to handle ever-increasing data volumes as the IoT expands.

Although cloud security has made considerable strides in recent years, fog computing will require organizations and service providers to adjust those models to focus more on endpoint devices.

Additionally, certain data types actually benefit from centralized models. Data that carries the utmost security concerns, for example, will require the security advantages of a centralized approach, or of one that continues to rely solely on physical infrastructure.

“One of the benefits of centralization is that you can focus your efforts and understand where data is and who has access to it,” said Jack Norris, SVP of data and applications at MapR. “So in a way, it can simplify some of the aspects of protecting that information.” Data whose queries demand a high degree of complexity also benefits from the traditional centralized model. In general, data that merely requires network availability and speed is best suited to the decentralized paradigm.
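
That rule of thumb can be summarized as a simple placement policy. The sketch below, with hypothetical attributes, is one way to encode the considerations Norris describes alongside the latency guidance above; it is an illustration, not a prescription.

```python
from dataclasses import dataclass

@dataclass
class DataStream:
    name: str
    highly_sensitive: bool  # strict security and access-audit requirements
    complex_queries: bool   # heavy, multi-source analytics
    latency_critical: bool  # value decays quickly with delay

def placement(stream: DataStream) -> str:
    """Illustrative routing rule: security and query complexity favor the
    center; availability and speed favor the edge."""
    if stream.highly_sensitive or stream.complex_queries:
        return "centralized data center"
    if stream.latency_critical:
        return "fog/edge node"
    return "either"

print(placement(DataStream("payment-records", True, True, False)))
# -> centralized data center
print(placement(DataStream("vibration-telemetry", False, False, True)))
# -> fog/edge node
```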

Resource Allocation for the IoT

The Internet of Things is already a reality. This application of big data keeps broadening, continually incorporating new devices that generate ever greater volumes of data. Fog computing is a way of accounting for the future of the IoT and the cloud so that organizations can take a more prudent approach to resource allocation. The centralized paradigm can still work, but it will demand constant additional resources, upgrades, and networking investments. The decentralized method, by contrast, is better aligned with the flexibility and agility that characterize the more prevalent data management trends and applications today. Ultimately, the latter is far more sustainable than the former.

Jelani Harper has written extensively about numerous facets of data management for the past several years. His many areas of specialization include semantics, big data, and data governance.
