How to Address Common Big Data Pain Points

by Kaushik Pal   |   May 4, 2015

Big data presents businesses with tremendous volume and complexity. It comes in different forms – structured, semi-structured, and unstructured – and from any number of varied sources. From this huge, constantly growing amalgam of data types and sources, businesses must extract quick, accurate, and actionable insights.

Naturally, this process also presents businesses with several challenges and pain points, the biggest of which I discuss below.

Handling Huge Data Volume in Less Time

More than 2.5 quintillion bytes of data are created every day, from sources such as sensors, social media, business transactions, and mobile devices. Nearly every organization is overloaded with data.

To make critical business decisions quickly, companies need a resilient IT infrastructure that is capable of processing data rapidly and delivering real-time insights.

When it comes to handling complex data at scale, Apache Hadoop is the framework that most often comes to mind. Its MapReduce model breaks an application into smaller fragments, each of which is executed on a node within a cluster. However, Hadoop on its own presents challenges with scheduling, resource sharing, cluster management, and data sharing. To address these issues, a number of commercial distributions are available, including IBM InfoSphere BigInsights, Cloudera, and Hortonworks, which handle these concerns and make parallel processing run smoothly.
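To make the MapReduce idea concrete, here is a minimal word-count job written in Python for Hadoop Streaming. This is only a sketch: the script names, input and output paths, and the streaming invocation are assumptions that depend on your cluster setup.

```python
#!/usr/bin/env python
# mapper.py (sketch) -- emits one "word<TAB>1" pair per word read from stdin.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print("%s\t%d" % (word, 1))
```

```python
#!/usr/bin/env python
# reducer.py (sketch) -- Hadoop sorts mapper output by key, so counts for the
# same word arrive together and can be summed in a single pass.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print("%s\t%d" % (current_word, current_count))
        current_word, current_count = word, int(count)

if current_word is not None:
    print("%s\t%d" % (current_word, current_count))
```

On a running cluster, these two scripts would typically be submitted with the Hadoop Streaming jar, along the lines of hadoop jar hadoop-streaming.jar -files mapper.py,reducer.py -mapper mapper.py -reducer reducer.py -input /data/in -output /data/wordcount (the jar location and paths here are placeholders).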

Cleaning and Formatting Data to Get Meaningful Output

Data cleaning, sometimes referred to as data cleansing, is an integral part of data analysis. In fact, cleaning the data often takes more time than performing the statistical analysis itself. The process of data cleaning and statistical analysis includes the following steps:

Figure: Data cleaning and analysis steps.

Raw data: This is the data as it comes in. In this state, the data might not have the appropriate headers, might have incorrect data types, or might contain unknown or unwanted character encoding.

Technically correct data: Once the raw data have been modified to remove these discrepancies, they are considered technically correct. The data now have appropriate headers and the correct character encoding.

Consistent data: At this stage, the data are ready for any sort of statistical analysis and can serve as the starting point for analytics.

Statistical results and output: Once the statistical results are obtained, they can be stored, reused, and formatted for publishing any kind of report.

Several extract, transform, and load (ETL) tools are available for the data cleaning and formatting tasks. The cleaned data can then serve as input to the processing system, but the cleaning and formatting parameters must be configured properly to produce the expected input.
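As a rough illustration of these stages, the sketch below walks a raw CSV file through the technically correct and consistent stages and then stores a summary for reporting. It uses the Python pandas library; the file names, column names, and validation rules are assumptions made for the example.

```python
# A minimal data-cleaning sketch with pandas; file and column names are assumed.
import pandas as pd

# Raw data: read with an explicit encoding and supply the missing headers.
raw = pd.read_csv("sales_raw.csv", encoding="utf-8", header=None,
                  names=["order_id", "region", "amount", "order_date"])

# Technically correct data: enforce the intended data types.
raw["amount"] = pd.to_numeric(raw["amount"], errors="coerce")
raw["order_date"] = pd.to_datetime(raw["order_date"], errors="coerce")

# Consistent data: drop rows that violate basic rules before any analysis.
consistent = raw.dropna(subset=["amount", "order_date"])
consistent = consistent[consistent["amount"] >= 0]

# Statistical results and output: store summaries so they can be reused in reports.
summary = consistent.groupby("region")["amount"].agg(["count", "mean", "sum"])
summary.to_csv("sales_summary.csv")
```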

Representing Data in a Visual Format

Representing unstructured data in a visual format that the audience can read and understand is a challenge that organizations working with big data will face more and more as their ability to analyze unstructured data grows.

Visualizations such as graphs and tables can be used to represent the data. But different types of visualizations work best with different types of data. For example, categorical data are best represented by bar charts, line graphs, or pie charts. Continuous data are best represented with histograms.
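The short matplotlib sketch below illustrates this pairing of chart type and data type; the values are invented purely for the example.

```python
# Bar chart for categorical data, histogram for continuous data (sample values only).
import matplotlib.pyplot as plt

categories = ["North", "South", "East", "West"]   # categorical variable
order_counts = [120, 85, 97, 143]                 # one count per category
order_amounts = [12.5, 18.0, 22.3, 35.9, 41.2,    # continuous variable
                 44.8, 52.0, 58.7, 63.1, 75.4]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.bar(categories, order_counts)
ax1.set_title("Orders per region (categorical)")
ax2.hist(order_amounts, bins=5)
ax2.set_title("Order amounts (continuous)")
plt.tight_layout()
plt.show()
```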

Making the Application Scalable

As the volume of data grows by the day, scalability becomes a problem. A common scalability challenge when collecting data is that data services are deployed across multiple technology stacks: Apache/PHP for the front end, with languages such as Java or Scala used to interact with the database or the front end. These application-level scalability issues can be resolved with proper clustering. On the storage and processing side, the data is distributed on the Hadoop file system and the processing is distributed across the cluster as well, so scalability there is not an issue; it is scalable by design.

Selecting the Appropriate Tool for Data Analysis

No matter which approach organizations take to collecting and storing data, that effort is wasted without an appropriate tool for analysis. Select the analysis tool carefully: once chosen, it cannot be changed very often, because it is difficult to move an application from one tool to another. While selecting the tool, keep the following considerations in mind:

    • Volume of data

    • Volume of transactions

    • Legacy data management and applications

The volume of data and transactions can be handled by any Hadoop-based tool, such as IBM InfoSphere BigInsights or Cloudera. Legacy data, however, needs to be formatted properly to meet the input requirements of the analytics tool; otherwise, the tool will not be able to analyze it correctly.
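As a simple illustration of that formatting step, the sketch below reshapes a legacy fixed-width export into a CSV layout an analytics tool could ingest. The file names, column widths, and column names are assumptions for the example.

```python
# Reformat a legacy fixed-width export into CSV; the layout details are assumed.
import pandas as pd

legacy = pd.read_fwf("legacy_export.txt",
                     widths=[10, 20, 12],
                     names=["account_id", "customer_name", "balance"])

# Coerce the balance column to a numeric type so the analytics tool can use it.
legacy["balance"] = pd.to_numeric(legacy["balance"], errors="coerce")

legacy.to_csv("analytics_input.csv", index=False)
```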

Deployment in Production

Many well-developed applications go to waste because the deployment process is not smooth. Deployment includes integrating the new system with the existing production system. Many enterprise applications and dashboards query the new system directly, so select a tool that can handle these types of queries efficiently.

Being aware of and addressing these pain points at the outset of any big data initiative can provide organizations with realistic expectations regarding their big data programs and allow them to realize meaningful insights and deliver value and ROI to the bottom line much more efficiently.

Kaushik Pal has more than 16 years of experience as a technical architect and software consultant in enterprise application and product development. He is interested in new technology and innovation, as well as technical writing. His main focus areas are web architecture, web technologies, Java/J2EE, open source, big data, cloud, and mobile technologies. You can find more of his work at www.techalpine.com. Email him at techalpineit@gmail.com.



