(and the best features to look for if you’re in the midst of a storage upgrade now)

Enterprises today have a lot on their plates when it comes to managing storage infrastructure and the critical data that lives within it. They must provision capacity, optimize performance, and manage backups – all while figuring out how to cope with constantly expanding data volumes. While the day-to-day tasks are many, every so often a strategic initiative comes along that calls for new capabilities that aren’t available on an enterprise’s existing infrastructure. That’s when things get really interesting – when storage pros can take a step back from their daily routines and kick the tires on some cool new technologies.

Having worked with hundreds of enterprises that have found themselves facing the need to update their storage solutions, Veristor is in a unique position to highlight some of the most common initiatives that are driving storage pros to make a technology change. And since we spend a lot of time evaluating new storage solutions ourselves, we’ve learned a thing or two about the features and capabilities that are best suited to each of these initiatives. This blog outlines the five major drivers of a storage upgrade and highlights the features you should consider for each of them.

Replacing Outdated Storage Infrastructure

One reason enterprises undergo a storage upgrade is quite simple: their existing infrastructure has aged well beyond typical refresh cycles to the point that it no longer supports the business’ needs. In this case, some organizations may notice that an application, operating system, or other platform in the IT stack no longer supports the storage device or that performance demands have surpassed the device’s capabilities. Whatever the cause, when a storage platform reaches the end of its useful life, it’s going to trigger a search for what’s next.

For enterprises that are about to say goodbye to the old and hello to the new, there are a few things to keep in mind. First, your exact feature set is going to be driven by business needs, application workloads, and price tolerance. That’s why it’s a good idea to document your requirements when it comes to performance, capacity, and features. It’s also wise to set a budget. But beyond the basics, there’s one piece of advice we find ourselves repeating to storage pros who are ready for a refresh: focus on the features that make a storage platform better at managing where data lives.

When a storage device can sense the performance dynamics of a given data set and then move it to the best media for its needs, it can improve both efficiency and performance. With this feature, for example, the array can determine that a set of financial data is being accessed frequently and processed heavily, then move it from spinning disk to flash storage (or even RAM) for better performance. Unlike traditional scheduled tiering, this intelligent placement is always on, working in real time to make sure data lives on the best media. Not only can this significantly reduce the time it takes to manage data, it can also seriously lower storage costs by preventing performance overkill.
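
To make the idea concrete, here's a minimal sketch (in Python) of the kind of decision logic such a feature runs internally. The thresholds, tiers, and class below are illustrative assumptions, not any vendor's actual implementation – real arrays do this continuously in firmware.

```python
from dataclasses import dataclass

# Conceptual sketch only: the tiers, thresholds, and class are hypothetical,
# not any vendor's API. Real arrays run equivalent logic continuously.

FLASH_PROMOTE_IOPS = 5_000   # promote a data set when it gets this hot (assumed value)
DISK_DEMOTE_IOPS = 500       # demote it when activity falls back off (assumed value)

@dataclass
class DataSet:
    name: str
    recent_iops: int          # rolling measure of read/write activity
    tier: str = "disk"        # "disk" or "flash"

def place(ds: DataSet) -> None:
    """Move a data set to the media class that matches its observed activity."""
    if ds.recent_iops > FLASH_PROMOTE_IOPS and ds.tier != "flash":
        ds.tier = "flash"     # hot data earns the fastest media
    elif ds.recent_iops < DISK_DEMOTE_IOPS and ds.tier != "disk":
        ds.tier = "disk"      # cold data frees flash for hotter workloads

# Always-on placement is just this decision applied continuously,
# which is what distinguishes it from scheduled tiering jobs.
for ds in [DataSet("financials", recent_iops=12_000), DataSet("archives", recent_iops=40)]:
    place(ds)
    print(ds.name, "->", ds.tier)
```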

Supporting Unique Application Requirements

Sometimes a business wants to roll out an application that comes with very specific requirements – and the existing storage systems can't meet them. We see this a lot with workloads that have stringent performance needs or are heavily analytics-based. In both cases, enterprises should look for a few key features as they evaluate new storage solutions to be sure these applications get what they need.

Performance-intensive workloads, such as those that power equities trading or banking transactions, can put tremendous pressure on the storage infrastructure. And because microsecond slowdowns can cost thousands in the financial world, it's critical to choose an array that's up to the task. In our experience, all-flash arrays (AFAs) are a no-brainer for supporting performance-intensive applications. Not only do they come with the obvious benefit of extremely fast reads and writes, but the best AFAs also offer value-added features:

  • Intelligent data reduction tools such as deduplication and compression – to improve storage efficiency (see the sketch after this list)
  • NVMe – which can take full advantage of flash media and really accelerate data access
  • Infrastructure as code – to speed provisioning and help development get more agile
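
To show what the data-reduction bullet means in practice, here's a minimal sketch of content-based deduplication: data is split into chunks, each chunk is hashed, and identical chunks are stored only once. The chunk size and data structures are illustrative assumptions – real AFAs do this inline, with far more sophistication and usually combined with compression.

```python
import hashlib

# Minimal illustration of block-level deduplication: identical chunks are
# stored once and referenced by hash. Chunk size and structures are
# illustrative assumptions, not any array's actual implementation.

CHUNK_SIZE = 4096  # assumed fixed chunk size

def dedupe(data: bytes):
    store = {}        # hash -> unique chunk (the shared pool)
    recipe = []       # ordered list of hashes needed to rebuild the data
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # keep the chunk only if it's new
        recipe.append(digest)
    return store, recipe

data = b"A" * 16384 + b"B" * 4096        # four identical chunks plus one unique one
store, recipe = dedupe(data)
stored = sum(len(c) for c in store.values())
print(f"logical: {len(data)} bytes, stored: {stored} bytes, ratio: {len(data)/stored:.1f}x")
```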

As for analytics-based application workloads, which help businesses derive insights from large pools of data, you can't go wrong with a high-performance AFA. But perhaps the most important feature to look for here is cloud connectivity. That's because analytics workloads are notoriously transient, with peaks of intense requirements followed by valleys of inactivity. Purchasing and managing dedicated on-site storage for the peaks can be extremely costly and seriously inefficient in the valleys. If your new array features cloud connectivity, you can build an infrastructure around a moderate need and burst into the cloud when additional resources are needed. The key things to look for here are an open approach, which can help you avoid being locked into a specific cloud provider, and the simplicity with which data can be moved from the array to the cloud.
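
As one illustration of what an open approach can look like, the sketch below talks to object storage through the S3 API (via boto3) with a configurable endpoint, so the same code can point at any S3-compatible target rather than a single provider. The endpoint, bucket, threshold, and file names are assumptions, and a cloud-connected array would handle this movement itself – the sketch just shows the pattern.

```python
import boto3

# Illustrative only: a cloud-connected array moves data itself; this sketch
# shows why an S3-compatible, endpoint-agnostic interface avoids lock-in.
# Endpoint, bucket, credentials, and threshold are assumed placeholders.

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.com",  # any S3-compatible target, not one vendor
)

BURST_THRESHOLD = 0.80  # offload cold data when local utilization passes 80% (assumed)

def maybe_burst(local_utilization: float, cold_files: list[str], bucket: str = "analytics-burst"):
    """Push cold data sets to object storage when the on-prem array runs hot."""
    if local_utilization < BURST_THRESHOLD:
        return
    for path in cold_files:
        s3.upload_file(path, bucket, path)  # same call works against any S3-compatible store

# Example (would attempt uploads against the placeholder endpoint):
# maybe_burst(0.85, ["datasets/q3_clickstream.parquet"])
```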

Adopting a Cloud-like Approach

One of the most interesting initiatives that’s changing how storage pros evaluate new technologies is the desire to adopt a cloud-like operating model within the data center. The cloud has raised the bar when it comes to speed of provisioning, simplicity of management, and visibility into infrastructure costs, and more and more IT pros want to emulate that experience for their internal clients. In many cases, that’s leading them to think differently about what’s most important in a storage platform.

When an IT organization makes a commitment to move toward a cloud-like operating model, it typically stems from an interest in simplifying. While it may seem basic, the first thing we look at here is the management platform. We love interfaces that automate as many tasks as possible and make everything easy to find. The more time we can save for staff, the faster everything gets. We also like self-diagnosing and self-healing capabilities that speed up troubleshooting when something goes wrong. Some storage platforms come with advanced monitoring and analytics services, beyond typical phone-home features, that are delivered by the manufacturer, so someone is always on the lookout for signs of trouble and ready to help fix small issues before they grow or spread. To us, the first step toward getting more cloud-like is to simplify those routine storage management tasks as much as possible.

From there, we look at features that can help make the storage environment operate a lot more like the cloud. Infrastructure as code comes to mind, of course, so developers can easily self-provision the resources they need right within their applications. Intelligent APIs are really important here too, helping to link storage infrastructure closely with the rest of the IT stack and unite on-prem resources with your cloud platform of choice. By seeking out open solutions, storage professionals might even be able to one-up the cloud by preventing their users from being locked in to a given platform.
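
As a rough sketch of what developer self-provisioning can look like, the example below creates a volume through a hypothetical storage REST API as part of a deployment script. The endpoint, payload fields, and token are placeholders – every platform exposes its own API, so check your vendor's reference – but the pattern of declaring storage in code rather than filing a ticket is the point.

```python
import requests

# Hypothetical example of provisioning storage through an array's REST API.
# The URL, payload fields, and auth token are placeholders, not a real API.
# The point is that provisioning lives in code (and version control),
# not in a ticket queue.

ARRAY_API = "https://array.example.internal/api/v1"
TOKEN = "REPLACE_ME"

def provision_volume(name: str, size_gb: int, performance_tier: str = "flash") -> dict:
    """Create a volume as part of an application's deployment pipeline."""
    resp = requests.post(
        f"{ARRAY_API}/volumes",
        json={"name": name, "size_gb": size_gb, "tier": performance_tier},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# A deployment script or CI job can then request exactly what the app needs:
# volume = provision_volume("orders-db-data", size_gb=500)
```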

Re-evaluating the Cost Structures of IT

Here’s a trend that we’ve seen quite a bit lately, and we’re pretty sure it also stems from the cloud. Organizations are beginning to shift the way they think about their investments in IT from today’s capital-first model to more of an operational-expense structure. The cloud likely set this trend in motion with its predictable (albeit expensive) usage-based pricing. But it’s also the result of accelerating innovation, which can reduce the technology refresh cycle and cut into the ROI for infrastructure purchases. Whatever the cause, a shift to an operational expense structure is leading some organizations to think about their storage purchases in a new way.

One of the most exciting “features” that applies here is more of a vendor business model than a feature – the ability to provision storage capacity on demand. Some forward-thinking storage manufacturers are offering arrays that come pre-loaded with a massive amount of capacity, but only charge enterprises based on their actual utilization. This lowers the initial investment for storage and helps businesses keep their costs well aligned with their needs. Because new capacity can be provisioned fast, this can also make the on-prem infrastructure feel a lot more like the cloud.
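
A quick back-of-the-envelope comparison shows why this matters. The capacities and prices below are made-up placeholders, not quotes from any vendor – the point is simply that a consumption model tracks what you actually use.

```python
# Back-of-the-envelope comparison of capex vs. consumption pricing.
# All capacities and prices are made-up placeholders, not vendor quotes.

installed_tb = 500          # capacity the vendor pre-loads on the array
used_tb = 180               # what the business actually consumes this month
capex_per_tb = 300.0        # up-front purchase price per TB (assumed)
opex_per_tb_month = 12.0    # consumption rate per TB per month (assumed)

upfront_cost = installed_tb * capex_per_tb   # paid whether or not it's used
monthly_bill = used_tb * opex_per_tb_month   # tracks actual utilization

print(f"Traditional purchase: ${upfront_cost:,.0f} on day one for {installed_tb} TB")
print(f"Consumption model:    ${monthly_bill:,.0f} this month for {used_tb} TB used")
```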

Speaking of the cloud, another feature that can help businesses shift to an operational cost structure is the ability to easily connect an array with the cloud. This helps businesses tap into cloud storage as needed, rather than deploying expensive hardware that isn’t fully utilized. A good way to measure an array’s ability to connect to the cloud is to dive deep into its APIs. But also look for advanced features that allow you to burst to public cloud resources, perform backups to the cloud, and recover that data easily.

Regular (Lifecycle) Storage Refresh

Perhaps the most common initiative driving storage purchase decisions today is the regularly scheduled storage update. Whether they're motivated by the allure of new features or an aversion to the growing maintenance expenses that come with aging equipment, most enterprises plan to retire storage hardware within three to five years of purchase.

A storage refresh is a great opportunity to get into the latest feature sets and technologies. Keep an eye out for an all-around solid array built around flash storage and complemented with advanced features that can make the most of it. These include intelligent data management, data reduction techniques, and NVMe. Also look for streamlined management and simplified cloud connectivity. Finally, if you're factoring in disaster recovery (DR), look for an array that can run in an active-active configuration. This allows you to use the resources in your DR environment to support your business, rather than having them sit idle until an outage occurs.

Conclusion

There are quite a few initiatives driving enterprises to evaluate and ultimately purchase new storage devices. Some are tied to the regular course of business, while others are bound to unique application requirements. In a few cases, storage purchases are even wrapped into strategic technology initiatives or high-level business transformation. No matter what is driving your next move in storage, it is critical to keep a watchful eye on the emerging features that can help your initiative succeed.

Whatever your objective, Veristor can help you choose the storage features best suited to achieving it. Learn more at https://veristor.com/datacenter/enterprise-storage/