The Advantages of Storage Sprawl

When IT planners see “storage consolidation” show up on the IT whiteboard, they know a long, arduous project is ahead of them. Storage consolidation is typically the goal of a storage refresh. These refreshes used to be a once-every-four-or-five-year event as storage systems reached their end-of-life, but in the modern data center they come at a much faster pace, and instead of consolidation most refreshes create more sprawl. Storage sprawl occurs as IT addresses new performance expectations and unexpected increases in storage capacity consumption. Armed with the right storage software and processes, IT professionals can embrace storage sprawl instead of dreading it.

Is Consolidation Dead?

Many IT pundits believe that the concept of storage consolidation is now dead. The justification for this position is that the two reasons for a refresh, additional performance or additional capacity, are usually addressed by mutually exclusive systems. Adding a storage system with expensive flash storage but relatively low capacity addresses performance, while adding a storage system with very inexpensive hard disk storage and massive amounts of space addresses only the capacity problem. Few systems address both demands.

In theory, a storage system could be designed to meet both the performance and capacity demands of the data center. The real consolidation challenge, however, is timing. The data center now responds to capacity and performance concerns rapidly, using point solutions instead of waiting for budgets to align so it can buy a single large system. There is no single “clock” in the data center that strikes when it is time for a storage refresh. Instead, there are now multiple “clocks”, one for each storage system.

The Advantages of Storage Sprawl

The opposite of storage consolidation is storage sprawl, and best practice is to limit sprawl as much as possible. Sprawl, however, is happening in almost every data center. The demands for better performance and more capacity arrive at a faster pace than ever, and meeting them can’t wait until the next storage refresh; IT is forced to solve the problem now. As a result, most data centers have multiple storage systems: a legacy system for their bare-metal, mission-critical applications; two or three systems for their virtualized environment; one or two systems for their virtual desktop environment; and several more to meet the capacity requirements of their unstructured data.

While a single storage system can be architected to address both performance and capacity, again, most data centers can’t wait for a storage refresh to address them with a consolidated approach. Addressing performance and capacity problems separately is in some ways easier and offers real advantages. First, the organization can buy the exact type of storage solution it needs at the time it needs it. Second, these focused systems are likely to be better at addressing the problem they were designed to solve. Finally, the focused approach should lead to the purchase of systems that are less expensive, at least from a hard-cost perspective.

The advantage of storage sprawl is that it enables IT to address specific problems as they arise. Point storage solutions also tend to be part of smaller projects that address pressing problems, which makes them easier to get approved. Finally, for many data centers storage sprawl may simply be a way of life at this point. No matter how good storage systems become at consolidating both performance and capacity workloads, a data center may be too far down the road to ever consolidate into a single system.

The big challenge for IT is living with the downsides of storage sprawl while the organization enjoys all the advantages.

Managing Storage Sprawl

The single biggest challenge of storage sprawl is managing the disparate pieces of hardware. IT needs to learn each storage system’s feature set and decide how to take best advantage of it, while also working around features that a particular system may be missing. Once the capabilities of each system are understood, IT needs to figure out how to place data sets on the various systems. Data movement is often a manual process and is typically done once, which means the resources are never used at maximum efficiency. Finally, from a day-to-day operations perspective, IT has to interface with each system’s GUI individually.

Software Defined Storage (SDS) was supposed to help resolve these issues. The problem is that most SDS solutions only solve part of the problem: they replace the storage software already on the storage systems with a common feature set and a common management interface. While a common interface does make management easier, it does not make IT more efficient.

Data Centricity Is the Next Step in SDS

The next step for SDS is to make IT more efficient. While a few SDS solutions can migrate data between systems, that migration is designed to be a manual and rare event, not a live, active placement of data. Few SDS solutions provide automated data movement based on policy.

IT needs a Quality of Service (QoS) driven capability built into SDS that will move data between storage systems based on access patterns or performance criticality. In other words, instead of managing LUNs or volumes, IT professionals need an SDS solution that allows them to manage data. They set policy on that data based on performance and data protection requirements, and the next generation SDS solution then moves data between the storage systems to satisfy that policy. It is critical that the SDS solution be able to apply these policies across the entire storage infrastructure, including multiple shared storage arrays and even internal server storage. The sketch below illustrates the idea.
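To make the concept concrete, here is a minimal sketch of policy-driven placement in Python. Every name in it (the tier table, QosPolicy, placement_for, rebalance) is a hypothetical illustration, not the API of any particular SDS product; a real solution would discover tiers automatically, factor in observed access patterns, and move data live rather than printing a plan.

```python
from dataclasses import dataclass

# Hypothetical storage tiers, ordered cheapest-first. A real SDS layer
# would discover these from the arrays and server-internal storage it
# manages rather than from a hard-coded table.
TIERS = [
    ("capacity-nas", {"latency_ms": 20, "replicas": 1}),
    ("server-local", {"latency_ms": 2,  "replicas": 1}),
    ("hybrid-array", {"latency_ms": 5,  "replicas": 2}),
    ("flash-array",  {"latency_ms": 1,  "replicas": 2}),
]

@dataclass
class QosPolicy:
    name: str
    max_latency_ms: int   # performance requirement
    min_replicas: int     # data-protection requirement

@dataclass
class DataSet:
    name: str
    policy: QosPolicy
    current_tier: str

def placement_for(ds: DataSet) -> str:
    """Return the cheapest tier that still satisfies the data set's policy."""
    for tier, spec in TIERS:
        if (spec["latency_ms"] <= ds.policy.max_latency_ms
                and spec["replicas"] >= ds.policy.min_replicas):
            return tier
    return ds.current_tier  # nothing qualifies; leave the data in place

def rebalance(datasets):
    """Plan the moves needed to bring every data set back onto policy."""
    for ds in datasets:
        target = placement_for(ds)
        if target != ds.current_tier:
            # A real engine would perform the migration live; this
            # sketch only reports the decision.
            print(f"move {ds.name}: {ds.current_tier} -> {target}")
            ds.current_tier = target

gold = QosPolicy("mission-critical", max_latency_ms=1, min_replicas=2)
bulk = QosPolicy("archive", max_latency_ms=20, min_replicas=1)

rebalance([
    DataSet("oltp-db", gold, current_tier="hybrid-array"),  # promoted to flash
    DataSet("backups", bulk, current_tier="flash-array"),   # demoted to NAS
])
```

The design point the sketch is meant to show is that the policy, not the LUN, becomes the unit of management: adding a new storage system simply adds a row to the tier inventory, and the next rebalance pass puts it to work automatically.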

Sprawl? Bring it on!

Fighting sprawl may sound like the right thing to do, but for some data centers storage sprawl is a reality they can’t stop; there are too many moving parts. By using a next generation SDS solution, IT can embrace storage sprawl and add storage systems to address specific needs in the environment. A next generation SDS solution maintains sanity by automatically leveraging the new system to meet not only the current project’s storage demands but the overall data center QoS objectives.

Sponsored by ioFABRIC

George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.
