As famed American singer-songwriter Bob Dylan once said, “I accept chaos. I’m not sure whether it accepts me.” There’s no question that businesses today face chaotic data environments. Instinct might tell you to run, but embracing the chaos isn’t as scary as it may seem, and it can open up unprecedented opportunities. The key is creating a system that lets us work through the chaos, rationalize trends within the data, and determine next steps with confidence.
A universal data framework enables unconstrained access to data across the enterprise, creating a logical data fabric for the organization. This fabric gives users access to all of the organization’s data, exactly where they need it. For data to be truly universal, however, organizations must ensure they have all the necessary elements in place. These are the seven major laws to follow when building a universal data solution.
- Performance
Performance is fundamental. Enterprises need an infrastructure that delivers data at the speed users require and at the speed the business demands. Even in 2017, many organizations still hear the complaint: “I can’t get access to the data when I want it, where I want it.” Just as a universal data solution must eliminate the latency that results from dividing data into separate environments or tasks, it must also deliver the performance demanded by every dependent user and process. Without performance, none of the rest really matters.
- Data Freedom
Data cannot be constrained. In the modern world of digital transformation and Industry 4.0, data is everywhere. Data exists both inside the four walls of the enterprise and outside those walls. Data can be found on your premises and in the cloud. Users and systems generate data, and various third parties also deliver it. Each of these kinds of data must flow seamlessly, in and out. The ability to innovate and to enable rapid growth depends on this freedom. According to recent survey data from SAP, 85 percent of those surveyed say they struggle with data from a variety of locations, and 72 percent say their data landscape is complex due to the variety and number of data sources. These data points illustrate the growing need for businesses to eliminate silos, ensuring data from all sources can travel wherever it needs to go.
- Data Discovery
Having access to all the data is just the beginning. Enterprises must be able to find and understand all the data within their organizations and model it to their specific business requirements. In the era of big data, data discovery has become a critical part of the data management process because of the need to find relationships and insights hidden within the data. Being able to develop deep and accurate models quickly ensures that the enterprise can operate flexibly, changing direction as new discoveries are made.
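To make the discovery idea concrete, here is a toy sketch of automated data profiling: inferring simple column profiles for two datasets and flagging columns whose values overlap enough to suggest a hidden relationship (a candidate join key). The function and dataset names are illustrative, not part of any SAP product.

```python
# Toy data-discovery sketch: profile tabular datasets and flag candidate
# join keys (columns whose value sets overlap heavily across datasets).
# All names here are illustrative.

def profile(rows):
    """Collect the observed types and distinct values for each column."""
    columns = {}
    for row in rows:
        for name, value in row.items():
            col = columns.setdefault(name, {"types": set(), "values": set()})
            col["types"].add(type(value).__name__)
            col["values"].add(value)
    return columns

def candidate_joins(profile_a, profile_b, threshold=0.8):
    """Return column pairs whose values overlap enough to suggest a join."""
    pairs = []
    for a_name, a_col in profile_a.items():
        for b_name, b_col in profile_b.items():
            smaller = min(len(a_col["values"]), len(b_col["values"]))
            overlap = len(a_col["values"] & b_col["values"])
            if smaller and overlap / smaller >= threshold:
                pairs.append((a_name, b_name))
    return pairs

orders = [{"order_id": 1, "cust": "C1"}, {"order_id": 2, "cust": "C2"}]
customers = [{"id": "C1", "name": "Acme"}, {"id": "C2", "name": "Globex"}]

# Discovers that orders.cust lines up with customers.id.
print(candidate_joins(profile(orders), profile(customers)))  # [('cust', 'id')]
```

Production discovery tools do far more (statistics, lineage, semantics), but the principle is the same: relationships are surfaced from the data itself rather than declared up front.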
- Elasticity
It’s very simple: your computing power must not limit your data, and your data must not limit your computing power. Each must be able to scale independently. Decoupling the two keeps data independent and elastic, but that decoupling must not come at the cost of latency. After all, yesterday’s data is often 24 hours too late.
- Low Latency
The need for low-latency operations goes hand in hand with the need for high performance: latency is the gap between when data is recorded and when it is needed again. Users, systems, processes, and customers are no longer willing to wait for data. The pace of modern business is too brisk. Data must be available as soon as it is created, and it must be ready for incorporation into transactions, automated processes, or analysis as needed.
- Security and Integrity
Unsecured data is at risk of theft and tampering. At this point, securing data is a table-stakes requirement, and in a rapidly changing environment, data must remain secure. But beyond security, it is critical that data availability and usability are ensured. Organizations should define standards for data integrity (which may vary between use cases and modes) and ensure that those standards are met for all relevant users and processes.
- Interoperability
Today’s businesses are challenged to operate in different ways at the same time. Complex processes require mashing up structured data from mode 1 (traditional, predictable systems of record) with unstructured data from mode 2 (exploratory, agile applications). Moreover, critical new sources of data can arrive at any time. These must interoperate seamlessly. The enterprise data fabric must act as a true universal database, bringing all these different data sources together as the user needs them, without exposing the underlying complexity of the differences between data types.
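A minimal sketch of what such a universal access layer looks like from the caller’s side: one function mashes up a relational source (here, an in-memory SQLite table standing in for mode 1) with JSON documents (standing in for mode 2), and the caller never sees which backend answered. All source and function names are illustrative assumptions, not a real product API.

```python
# Sketch of a "universal" access layer: one call fans out to a relational
# source and a document source; the caller sees a single unified record.
import json
import sqlite3

# Mode-1 style structured source: a relational orders table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
db.execute("INSERT INTO orders VALUES (1, 'Acme', 120.0), (2, 'Globex', 75.5)")

# Mode-2 style unstructured source: JSON documents, e.g. clickstream events.
events = json.loads('[{"customer": "Acme", "page": "/pricing"},'
                    ' {"customer": "Globex", "page": "/docs"}]')

def customer_view(name):
    """Mash up both sources into one record, hiding the backend split."""
    row = db.execute(
        "SELECT total FROM orders WHERE customer = ?", (name,)).fetchone()
    pages = [e["page"] for e in events if e["customer"] == name]
    return {"customer": name, "total": row[0] if row else None, "pages": pages}

print(customer_view("Acme"))
# {'customer': 'Acme', 'total': 120.0, 'pages': ['/pricing']}
```

A real data fabric does this with query federation, pushdown, and caching across many engines, but the contract with the user is the same: one question, one answer, no exposed seams between data types.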
Data has become the new oil: there’s value in it, but it needs refining. The onus is on us to determine what steps we can take to enable the data to do virtually anything we want. When you incorporate these seven principles into your data framework, you’ll unlock a wealth of valuable business insights and stop seeing data as chaos, seeing it instead as a tremendous resource.
Greg McStravick is the President of Database and Data Management at SAP, leading development and go-to-market teams for SAP’s core digital innovation platform, SAP HANA, as well as Sybase databases (including ASE and IQ), EIM, Middleware, and HANA Vora. Formerly, Greg led the go-to-market teams and strategy for some of SAP’s largest and fastest-growing businesses, including SAP HANA Platform, Analytics, Database, and HANA Cloud Platform. With more than 20 years of progressive experience as a leader in technology solution sales management and strategy, Greg has held senior leadership positions throughout SAP, including President, U.S., where he was responsible for driving customer success and developing new opportunities for SAP to expand its business across the entire U.S. region.