The lives of IT professionals are complex. They typically have to balance many projects, keep core systems up and running, and do it all with limited budgets.
Adding to this complexity are several challenges that have been growing over the past several years:
- The pace of new technology adoption
- The need to handle massive data growth
- The need to update, or even replace, unreliable or under-performing storage systems
- Unexpected downtime and performance slowdowns due to operational complexity
How can IT possibly keep up with the fast-moving state of technology and gain control over these issues? The first step is understanding exactly what the core problem is.
It seems unbelievable when you see it in print, but a full 25 to 30 percent of IT budgets are spent on improving and growing storage. With this level of resources dedicated to the problem, people and system resources have to be optimized, first and foremost, before additional storage is added. In other words, efficiency is critical and can save a company from wasting money and time on storage infrastructure and other purchases that may not be required.
The solution? Analyzing your application workload I/O profiles to know for sure whether new storage is needed, what type is needed, whether additional efficiencies can be gained, or whether the solution involves deploying in the cloud. To get started, here are four important things to consider before undertaking any storage upgrade project:
Understand your application workload profiles.
Every IT environment is different, but the IT purchasing process often relies mostly on input from vendors or best guesses about a company's performance requirements. To avoid getting stuck with an over-provisioned or under-provisioned solution that doesn't quite fit, you have to gather workload data and analyze exactly what your company needs. IT professionals need to understand how their application workloads interact with the underlying storage infrastructure.
How do you determine what your application workload profiles are and exactly what your company needs? Workload analysis, modeling and testing. A new breed of tools can analyze your production workloads in a vendor-agnostic way. With this data, you can run simulated workloads in a test environment under a variety of conditions and situations to learn how changing workloads will affect your infrastructure performance.
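As a rough illustration of the kind of profile such tools extract, the sketch below summarizes a hypothetical I/O trace into a read/write mix, average block size and IOPS. The trace record layout and field names here are assumptions for illustration, not any specific vendor's format.

```python
from dataclasses import dataclass

@dataclass
class IoEvent:
    # Hypothetical trace record: when the I/O happened, what kind it
    # was, and how large it was. This layout is an illustrative
    # assumption, not a real tool's trace format.
    timestamp_s: float
    op: str          # "read" or "write"
    block_bytes: int

def profile_workload(events):
    """Summarize a trace into the kind of profile a modeling tool replays."""
    total = len(events)
    reads = sum(1 for e in events if e.op == "read")
    duration = max(e.timestamp_s for e in events) - min(e.timestamp_s for e in events)
    return {
        "read_pct": round(100 * reads / total, 1),
        "avg_block_kib": round(sum(e.block_bytes for e in events) / total / 1024, 1),
        "iops": round(total / duration, 1) if duration > 0 else float(total),
    }

# A tiny made-up trace: mostly small reads plus one large write.
trace = [
    IoEvent(0.0, "read", 8192),
    IoEvent(0.5, "write", 65536),
    IoEvent(1.0, "read", 8192),
    IoEvent(2.0, "read", 8192),
]
print(profile_workload(trace))  # {'read_pct': 75.0, 'avg_block_kib': 22.0, 'iops': 2.0}
```

A profile like this (read/write ratio, block-size distribution, arrival rate) is what lets a workload generator replay something resembling your production I/O instead of a generic benchmark pattern.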
For the actual testing, nearly all companies will get more realistic results by avoiding third-party benchmarks – for example, those from the Storage Performance Council (SPC) or SPEC. Because these benchmarks are run under what would be considered ideal conditions, they depict best-case results, not what you would see in your real-world environment. More importantly, these generic benchmarks don't represent your specific application workloads.
Tools like workload modeling software and workload generators are purpose-built for this analysis. The tools available today are scalable and capable of reproducing production workloads with better than 99 percent realism. This provides a level of insight into workload behavior that was impossible until recently.
The step after running these complex tests is, of course, analyzing the results. Evaluate your infrastructure performance under all those worst-case conditions. This will give you a good idea of where the breaking points are, where efficiency can be improved and where you may need more resources.
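One simple way to turn a test sweep into a "breaking point" is to find the highest offered load that still meets a latency target. The sketch below assumes a sweep of offered load (IOPS) against measured p99 latency and an SLA threshold; the numbers and the SLA-based approach are illustrative assumptions, not a prescribed methodology.

```python
def find_breaking_point(results, latency_sla_ms):
    """Return the highest tested load (IOPS) whose measured latency
    still meets the SLA, or None if no tested load passes.

    results: dict mapping offered load (IOPS) -> measured p99 latency (ms).
    """
    passing = [load for load, lat in sorted(results.items())
               if lat <= latency_sla_ms]
    return max(passing) if passing else None

# Made-up sweep results: latency stays flat, then degrades sharply.
sweep = {10_000: 2.1, 20_000: 3.4, 40_000: 6.8, 80_000: 25.0}
print(find_breaking_point(sweep, latency_sla_ms=10.0))  # 40000
```

In practice you would sweep finer load increments around the knee, but even a coarse pass like this shows how far headroom extends before latency breaks the SLA.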
During this process, it's important to go beyond the basics and test for two IT constants: change and growth. Don't depend on your current situation for testing scenarios. You have to anticipate potential short-term and long-term changes, anywhere from tomorrow to five years from now. Common environment changes that have to be considered include device firmware updates, new network interface technologies, application software changes, media additions (such as flash), and new features such as compression and deduplication. Workload analysis tools can help in this phase as well, enabling you to add new scenarios to your tests, and even automatically testing scenarios that your team might not anticipate.
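On the growth side, one way to build forward-looking test scenarios is to scale today's baseline load by an assumed compound annual growth rate and test each projected level. The compound-growth model, the baseline and the growth rate below are all illustrative assumptions, not measured figures.

```python
def growth_scenarios(baseline_iops, annual_growth_pct, years):
    """Project load levels to test against, year by year, assuming
    compound annual growth (an illustrative model)."""
    factor = 1 + annual_growth_pct / 100
    return [round(baseline_iops * factor ** y) for y in range(years + 1)]

# e.g. 10,000 IOPS today, assumed 30% annual growth, three-year horizon
print(growth_scenarios(10_000, 30, 3))  # [10000, 13000, 16900, 21970]
```

Feeding each projected level back into the stress tests shows not just whether the infrastructure meets today's demand, but in roughly which year it would run out of headroom.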
Share your findings.
You’ve run your tests and analyzed your results. At this point, you have a pretty good feel for what you need now and what you might need in the future. But it’s important to also share your insights with others in IT. By sharing your results in an IT community, you can gain insights into issues you may not be aware of because you’re too close to the situation. Likewise, participating in a community and reviewing others’ results can give you critical insights into your own environment.
The benefits of following this process are unbeatable. The obvious part is that you will have access to metrics that allow complete transparency for tracking and predicting IT infrastructure performance, availability and utilization. This is valuable, for sure, but leads to higher-level benefits, like reduced risk, greater stability and the peace of mind that comes with predictable performance. Ultimately, this means having more time for your IT teams to focus on innovation and new projects that help the company succeed.
Len Rosenthal is chief marketing officer at Virtual Instruments. With more than 30 years of experience at leading public and privately held IT infrastructure companies, Mr. Rosenthal has held executive and senior positions at Load DynamiX, Panasas, Qlogic, Inktomi, HP and more.