Cutting the Cost of Big Data Value

by Simon Moss, CEO, Pneuron   |   October 11, 2016


The opportunity to harness big data to better engage existing and potential customers and to inform product development looms larger with every passing year. Many companies are beginning to see returns on their investments in data acquisition, management, and analytics in the form of improved operational efficiencies. About two-thirds of executives surveyed report that big data and analytics initiatives have had a significant, measurable impact on revenues, and 60 percent attest to a significant impact on costs as well.

Creating connections that produce actionable insights is driving revenue, but at what cost? As Claudia Imhoff, CEO of Intelligent Solutions, has suggested, “Part of the problem with big data is that we have become so enamored with the technology, we’ve forgotten what business problems we’re trying to solve with it.” She goes on to note that corporate bean counters “are looking at all of the money being poured into big data environments and not seeing a whole lot of value coming out of them.”

If we drill deeper, we find that more than half of the respondents reporting increased revenues and lower costs describe an impact of less than 3 percent. The trouble is that big data tends to cost big money. Weighed against the many millions being spent, the return on investment is not so clear-cut.
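To see why the return is murky, consider a rough, purely hypothetical calculation. The revenue, margin, and program-cost figures below are illustrative assumptions, not survey data; the point is simply that a 3 percent revenue lift can be wiped out by the cost of the program that produced it.

```python
# Illustrative arithmetic only; these figures are hypothetical assumptions,
# not data from the survey cited above.
annual_revenue = 1_000_000_000   # assume $1B in annual revenue
revenue_lift = 0.03              # a 3% revenue impact, the upper bound many respondents report
program_cost = 100_000_000       # assume a $100M, five-year big data program
operating_margin = 0.20          # assume 20% of revenue reaches the bottom line
years = 5

incremental_revenue = annual_revenue * revenue_lift * years   # $150M over five years
profit_impact = incremental_revenue * operating_margin        # $30M of actual profit
net_return = profit_impact - program_cost                     # -$70M: a negative return

print(f"Incremental revenue over {years} years: ${incremental_revenue:,.0f}")
print(f"Profit impact after margin: ${profit_impact:,.0f}")
print(f"Net return on the program: ${net_return:,.0f}")
```

Under these assumptions, even a headline-worthy revenue lift leaves the program $70 million under water once margins are applied.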

Broken Promises

Too many companies are being swept up in the wave of big data hype. They are failing to base their choice of big data solutions on real business cases, assuming that benefits will naturally follow the adoption of new technology. They are being persuaded that it’s worth committing $100 million to a five-year project that’s going to homogenize all of their data and involve thousands of people.

What they don’t realize is that the project has a very low chance of success, that they are locking themselves into a vendor’s roadmap, and that, in the best-case scenario, any potential business value is far beyond the horizon. The bulk of the problem here lies in the approach. Analytics can demonstrably extract real value from data, but the traditional, dated approach to preparing and accessing that data is fundamentally flawed.

Solving the Wrong Problem

The majority of big data solutions try to impose an abstract and alien data model across a massively heterogeneous data environment, which is growing more complex and distributed by the day. At best, this is an incredibly ambitious task, with costs that could eclipse the benefits. At worst, it’s simply impossible.

Consider for a moment that the real problem is variety, not volume. According to the NewVantage Partners Big Data Executive Survey 2016, 40 percent of Fortune 1000 firms named integrating data from new sources and legacy systems as the primary technical driver behind their big data investments. Only 14.5 percent of respondents answered volume, and velocity came third, at 3.6 percent.

Why then are we starting with the supposition that volume is the main problem? This idea that all the data must be centralized and normalized is actually creating a big data problem, not solving one.

If we accept that diversity and distribution are the real issues here, perhaps we can approach the problem from a different angle. Let’s recognize that for financial institutions, pharmaceutical companies, insurance carriers, and many other industries, the situation is growing more complicated all the time. Companies need the freedom to integrate new technologies and pivot quickly for competitive advantage, so data is going to become more distributed and sources more diverse.

Reversing the Paradigm

The lack of predictability, the opacity, and the cost of homogenization inherent in traditional big data solutions are no longer acceptable. It simply doesn’t make business sense. We need to find a way to take the analytics to the data and not the other way round. We need to find a way to extract value directly from our existing applications, databases, processes, and operating models.

If we take the time to understand the problem we are really trying to solve, it becomes clear that the real barrier is that the components of the solution are unbelievably distributed and incredibly diverse. Instead of turning this into a big data problem by creating a single, large volume of data, let’s move the analytics directly to the native data sources, orchestrating and creating the answers in real time with a non-invasive, high-performance, event-driven fabric that can grow to cover an expanding business.
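What might that look like in practice? Below is a minimal sketch of the “take the analytics to the data” pattern. The connector class, source names, and exposure figures are hypothetical illustrations, not Pneuron’s actual API: each source computes a small local aggregate where the data lives, and only those small results cross the network to be merged.

```python
# Hypothetical sketch: push a small analytic out to each native data source
# and combine the partial results, instead of centralizing the raw data first.
from concurrent.futures import ThreadPoolExecutor

class SourceConnector:
    """Stand-in for a source-specific adapter (warehouse, legacy app, etc.)."""
    def __init__(self, name, rows):
        self.name = name
        self.rows = rows  # pretend these rows live remotely, at the source

    def run_local(self):
        # In a real fabric this aggregate would execute at the source; only
        # the small, pre-aggregated result would cross the network.
        totals = {}
        for region, exposure in self.rows:
            totals[region] = totals.get(region, 0.0) + exposure
        return totals

def federated_exposure(connectors):
    """Fan the analytic out to every source in parallel, then merge."""
    combined = {}
    with ThreadPoolExecutor() as pool:
        for partial in pool.map(SourceConnector.run_local, connectors):
            for region, exposure in partial.items():
                combined[region] = combined.get(region, 0.0) + exposure
    return combined

sources = [
    SourceConnector("core_banking", [("EMEA", 120.0), ("APAC", 75.0)]),
    SourceConnector("legacy_risk", [("EMEA", 40.0), ("AMER", 210.0)]),
]
print(federated_exposure(sources))  # {'EMEA': 160.0, 'APAC': 75.0, 'AMER': 210.0}
```

Nothing is copied, normalized, or warehoused in this sketch; each system answers in its own native terms, and the orchestration layer only combines the answers.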

We can drastically cut the cost and time it currently takes to see a return on big data investments if we pause to question the foundational premise on which we are building.

For further reading on this subject by the author, see “Do you have a big data issue or is it a data diversity issue?” and “Heterogeneous analytics and making sandwiches.”

Simon Moss is Chief Executive Officer of Pneuron Corporation, a business orchestration software provider. He was previously CEO of Avistar and CEO of Mantas, later acquired by Oracle. He served as a Partner at PricewaterhouseCoopers and was co-founder of the Risk Management Services Practice at IBM. Moss also serves on the Board of Directors of C6 Intelligence. Contact him at simon@pneuron.com.