Historically, IT departments protected data in one of two ways: through backups, or through highly available system components combined with best practices, an approach usually reserved for larger organizations.
Snapshots enter the picture
Things started to change in the 1990s when snapshot technology, popularized by NetApp, was developed. The ability to take snapshots blurred the line between backups and highly available systems for data recovery, and many of the more sophisticated storage products were quick to adopt it. In the event of a data problem such as a virus attack, a software bug, or any other event that corrupted business data, IT departments now had a tool that allowed them to go back and recover most of the data lost.
The business benefit of using snapshots is easily articulated. Let’s say, for example, that a data corruption took place at 2:37 p.m. during the business day. With hourly snapshots in place, the company would stand to lose only the 37 minutes of data written since the 2:00 p.m. snapshot, or 1 hour and 37 minutes at most if that latest snapshot proved unusable. Without snapshots, it could lose the data for that entire business day, recovering only to the end of the previous business day, and that’s after several hours of restoring data from backups. In an organization with hundreds, if not thousands, of employees, the technology was fairly simple to justify.
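The arithmetic above can be sketched in a few lines. This is a minimal illustration, not any vendor's tooling; the function name and schedule are hypothetical, and it simply computes how much data sits between a corruption event and the most recent snapshot.

```python
from datetime import datetime, timedelta

def data_loss_window(event_time: datetime, interval: timedelta,
                     schedule_start: datetime) -> timedelta:
    """Return the data lost to a corruption event: the time elapsed
    since the most recent snapshot taken at or before the event."""
    elapsed = event_time - schedule_start
    # Floor-divide to count whole snapshot intervals completed before the event
    completed = elapsed // interval
    last_snapshot = schedule_start + completed * interval
    return event_time - last_snapshot

# Corruption at 2:37 p.m. with hourly snapshots taken on the hour:
event = datetime(2024, 1, 15, 14, 37)
start = datetime(2024, 1, 15, 0, 0)
print(data_loss_window(event, timedelta(hours=1), start))  # 0:37:00
```

Shortening the interval shrinks the worst-case loss window proportionally, which is the whole business case for snapshots over daily backups.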
When snapshots are no longer a “snap”
As with most things, however, the process did not remain that simple. Companies evolve over time, and as they do, their applications and data storage needs change as well. As a result, many companies now have numerous applications storing data across multiple volumes, requiring more coordination to ensure that the correct application data is “snapped” at the same time.
This problem is compounded further by the fact that applications feed one another and store different information on different data stores. Consider the example of a standalone online customer order system that uses an open source database and feeds some of the order information directly to an ERP system running on Oracle. Now the application managers, database administrators, and storage administrators need to understand what data is stored where, how the different pieces of information are tied together, and where each type of data resides at the moment snapshots are taken.
Some expensive storage systems attempted to simplify matters by introducing features such as consistency groups, which snapshot a set of data volumes together. But even minor changes had to be painstakingly coordinated to ensure that these groups would continue to work. Clearly, a simple solution had become a cumbersome, difficult-to-manage process that consumed significant resources.
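The coordination problem can be shown with a toy simulation, assuming nothing beyond the two-system example above: the `Volume` class and `write_order` helper are hypothetical stand-ins, not a real storage API. When each volume is snapshotted independently, a write that lands between the two snapshots leaves them mutually inconsistent; snapshotting both together, as a consistency group does, avoids this.

```python
import copy

class Volume:
    """Toy volume: a dict of records plus a point-in-time snapshot list."""
    def __init__(self):
        self.data = {}
        self.snapshots = []

    def snapshot(self):
        self.snapshots.append(copy.deepcopy(self.data))

def write_order(db_vol, erp_vol, order_id, amount):
    """One logical 'order' spans two volumes: order system and ERP."""
    db_vol.data[order_id] = amount
    erp_vol.data[order_id] = amount

db, erp = Volume(), Volume()
write_order(db, erp, "A-1", 100)

# Uncoordinated snapshots: an order arrives between the two snapshots.
db.snapshot()
write_order(db, erp, "A-2", 250)
erp.snapshot()
# The ERP snapshot now holds an order the DB snapshot knows nothing about.
print(set(erp.snapshots[-1]) - set(db.snapshots[-1]))  # {'A-2'}

# Consistency group: snapshot both volumes at the same point in time.
for vol in (db, erp):
    vol.snapshot()
assert set(db.snapshots[-1]) == set(erp.snapshots[-1])
```

In a real array the group must also quiesce or fence in-flight writes for the instant of the snapshot, which is exactly the coordination burden the article describes.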
Data recovery in today’s business landscape
Not surprisingly, many organizations have concluded that in addition to being resource intensive, the use of snapshots cannot guarantee same-day data recovery. While many large enterprises continue to spend time and resources implementing snapshot-based backups with varying degrees of success, most small to medium-sized businesses no longer bother because they simply do not have the means, resources, or skills to keep up.
Organizations big and small today have to constantly deal with the increasing risk of data corruption and data loss due to issues such as application bugs and database crashes, or security threats from various malware such as viruses and ransomware. With ransomware incidents happening more frequently than ever before, and the cost for businesses averaging about $300,000 according to the FBI, the importance of being able to recover same-day data quickly and efficiently can never be overemphasized. Businesses that have the right systems in place stand a better chance of remaining competitive.
Therefore, organizations must adopt new processes and innovations that embrace simplicity and reject the complexity of antiquated snapshot technologies: technologies that enable administrators to recover to a point in time before a ransomware attack or a database problem, and where everything is done automatically by the storage system, with no need to coordinate and map data for recovery between application groups, database administrators, and IT infrastructure personnel.
By doing so, businesses will ensure a future in which IT departments can deliver the value the business needs and have more time to improve IT services, rather than spending inordinate amounts of time on unnecessarily complex processes such as snapshot implementation and consistency group management.
Jacob is the Vice President of Product Strategy and Product Management at Reduxio, where he defines product vision and strategy for the data storage company. Reduxio builds a next-generation storage platform that provides breakthrough capacity savings and infinite data recoverability through unique real-time primary storage deduplication and protection technologies called NoDup™ and Backdating™.
Subscribe to Data Informed for the latest information and news on big data and analytics for the enterprise.