With the warmer-than-usual temperatures around most of the country, I think it’s safe to say that spring is on its way. This time of year tends to get a lot of attention for spring cleaning—which can be a good reminder to clean up your databases to ensure that you have the best data for reporting and analysis.
Database maintenance should be a priority more than once a year, but with the fast pace of today’s business world, it often gets pushed aside. With that in mind, here are some tips for spring cleaning your databases that you can apply throughout the year to ensure optimal performance and efficiency across your production environments.
1) Take inventory of what you have.
This may be easier said than done, especially these days, when it is so easy to spin up a database. In fact, I’ve spoken with a few customers whose developers will find a database that fits their needs, create it, and continue to manage it on their own. The result is a siloed system housing essential data and insight into program performance that nobody else knows how to use. If the developer who built that database leaves, nobody has the knowledge or understanding of what the mystery database handles or how it works.
You can’t manage what you don’t know, so a good understanding of which databases (and database types) your team and organization use can transform your operations. Compiling the list of databases and providers can feel a bit like playing hide-and-seek as you track down the right information, but once you have a comprehensive list, it makes all the difference when it comes to managing your databases.
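One lightweight way to start that inventory is to record each database in a simple registry and flag the entries nobody claims. The sketch below is illustrative only; the record fields, the example database names, and the team names are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class DatabaseRecord:
    """One entry in a lightweight database inventory (fields are illustrative)."""
    name: str
    engine: str       # e.g. "postgresql", "mysql", "mongodb"
    owner: str = ""   # team or person responsible; empty means unknown
    purpose: str = "" # what the data is used for

def unowned(inventory):
    """Return databases with no recorded owner -- the 'mystery databases'
    that become a risk when their creator leaves."""
    return [db for db in inventory if not db.owner]

inventory = [
    DatabaseRecord("orders", "postgresql", owner="platform-team", purpose="order history"),
    DatabaseRecord("perf-metrics", "mongodb"),  # spun up by a developer, owner unknown
]

for db in unowned(inventory):
    print(f"No owner on record for '{db.name}' ({db.engine})")
```

Even a registry this simple surfaces the siloed databases described above, because every entry either has a named owner or shows up on the unowned report.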
2) Test your backups.
Your backups are running, right? Good. When was the last time you tested them?
Earlier this year, GitLab suffered a database loss. Although they had multiple backup processes in place, they were unable to recover much of the data (you can read the full post-mortem on their blog). As GitLab showed us, even when you have backups in place, sometimes they can’t save you. And GitLab isn’t the first (or last) organization to experience this; it’s shocking how often it happens.
Running backups is an important first step, but it’s equally important to verify that each backup can actually be restored. After all, did a backup really happen if you never tested it to make sure?
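The restore-and-verify loop can be sketched in a few lines. This example uses SQLite purely because it is self-contained; in a real environment you would restore your production engine’s dump to a scratch server and run the same kind of sanity check (row counts, checksums) against it:

```python
import os
import sqlite3
import tempfile

def backup_database(source_path, backup_path):
    """Copy a live SQLite database to a backup file using the online backup API."""
    src = sqlite3.connect(source_path)
    dst = sqlite3.connect(backup_path)
    with dst:
        src.backup(dst)
    src.close()
    dst.close()

def verify_backup(backup_path, table, expected_rows):
    """A backup only counts if it restores: open the copy and check its contents."""
    conn = sqlite3.connect(backup_path)
    (count,) = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
    conn.close()
    return count == expected_rows

# Build a small source database, back it up, then test the backup.
workdir = tempfile.mkdtemp()
live = os.path.join(workdir, "live.db")
backup = os.path.join(workdir, "backup.db")

conn = sqlite3.connect(live)
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO orders VALUES (?)", [(1,), (2,), (3,)])
conn.commit()
conn.close()

backup_database(live, backup)
print("backup verified:", verify_backup(backup, "orders", expected_rows=3))
```

The point is the last line: the backup isn’t trusted until something has actually opened the copy and confirmed the data is there.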
3) Set standards and processes.
Every DBA has a different way of managing their databases, and database needs change often. Your organization may have a set of standards and processes, but they may not have kept up with the database changes implemented within your organization.
Perhaps the easiest way to determine the appropriate standards moving forward is to create a task force. This way, you can ensure all teams are represented and you gain insight into what is working—and what isn’t. During this discussion, you can establish standards and processes such as:
- Parameters for deploying a new database
- Which database types your organization supports
- Monitoring tools and their functionality
- Service-level agreements for defining and addressing issues
- How to sunset a database and archive its data
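Once the task force agrees on standards, they can be enforced as code rather than as a document nobody rereads. The sketch below checks a new-database request against two of the standards listed above; the policy values, field names, and the request itself are hypothetical examples, not prescriptions:

```python
# Hypothetical policy values agreed by the task force -- adjust to your own standards.
SUPPORTED_ENGINES = {"postgresql", "mysql", "sqlserver"}
REQUIRED_FIELDS = ("name", "engine", "owner", "backup_schedule")

def validate_request(request: dict) -> list:
    """Check a new-database request against the team's agreed standards.
    Returns a list of violations; an empty list means the request passes."""
    problems = []
    for field in REQUIRED_FIELDS:
        if not request.get(field):
            problems.append(f"missing required field: {field}")
    engine = request.get("engine")
    if engine and engine not in SUPPORTED_ENGINES:
        problems.append(f"unsupported engine: {engine}")
    return problems

# A request that forgot a backup schedule and picked an unsupported engine.
request = {"name": "analytics", "engine": "mongodb", "owner": "data-team"}
for problem in validate_request(request):
    print(problem)
```

Wiring a check like this into the deployment pipeline means the standards apply automatically to every new database, instead of depending on whoever happens to review the request.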
4) Put it on auto-pilot.
Lastly, don’t let your clean-database efforts go to waste. Implementing a heterogeneous database-monitoring solution ensures that, no matter how many different database types you currently use, your policies are followed year-round and you have the visibility you need to pinpoint problems in real time and drive better performance across your production environments.
After all, a good database-monitoring solution does more than just keep an eye on the health and performance of a database. It allows you to maintain your environment throughout the year by helping you discover the rogue databases that keep popping up, verify backups, and enforce your team’s agreed-upon standards.
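At its core, the rogue-database check is just a comparison between what discovery finds in the environment and what monitoring already covers. A minimal sketch, with made-up database names standing in for real discovery and monitoring feeds:

```python
def find_rogue(discovered: set, monitored: set) -> set:
    """Databases that exist in the environment but aren't under monitoring --
    the 'rogue' instances that tend to pop up over time."""
    return discovered - monitored

# Hypothetical feeds: what a discovery scan found vs. what monitoring covers.
discovered = {"orders", "billing", "perf-metrics", "dev-scratch"}
monitored = {"orders", "billing"}

for name in sorted(find_rogue(discovered, monitored)):
    print(f"unmonitored database: {name}")
```

Run on a schedule, a check like this turns rogue databases from an annual spring-cleaning surprise into an alert that fires the week one appears.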
As a result, proactive maintenance, like spending time to clean up your databases, can become a more regular activity, which drives better performance now and into the future.
Mike Kelly, Blue Medora
As CTO, Kelly is focused on advancing Blue Medora’s VMware product integrations and leading the product champion and new product development teams. Before becoming CTO, Kelly led the creation and development of a leading software solution for monitoring and managing Oracle databases on VMware. Prior to Blue Medora his career was focused on new product development and research, and includes experience at every stage of the product development cycle.