Speed, Space or Flexible Infrastructure: Which Matters Most for Data Center Storage?

By: Jacqueline Lee

The sheer volume of data generated, written and stored in today's data center (as much as 2.5 quintillion bytes per day, according to VCloudNews) is driving a significant need for more storage. However, adding that storage in the form of solid-state drives (SSDs) or hard-disk drives costs a data center space and energy efficiency. Enterprises can turn to remote object storage to free up space, but that strategy introduces latency that drags down speed. So the question remains: How can companies balance speed, space and a flexible infrastructure when it comes to data storage?

What Is Software-Defined Storage?

The answer to the speed-and-space dilemma is software-defined storage (SDS), one of several emerging approaches to flexible infrastructure. While sibling terms such as software-defined networking are readily understood by the C-suite, the meaning of SDS is tougher to articulate.

SDS is storage management software that runs on standard server hardware, backed by devices such as flash storage or disk arrays that can sit inside the server or in a network-attached storage device. This separation makes it easy for enterprises to provision new storage resources according to the data center's varying needs and to get more use from otherwise-idle resources.
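
In practice, that provisioning usually happens through an API call rather than a hardware change. The following is a minimal sketch against a hypothetical SDS management endpoint; the URL, payload fields and token are illustrative, not any specific vendor's interface.

```python
# A minimal sketch of provisioning a volume through a hypothetical SDS
# management API. The endpoint, payload fields and token are illustrative
# only; real SDS platforms each expose their own interface.
import requests

SDS_API = "https://sds.example.internal/api/v1"

payload = {
    "name": "analytics-scratch",   # hypothetical volume name
    "size_gb": 500,
    "tier": "flash",               # the software picks the backing devices
    "thin_provisioned": True,
}

resp = requests.post(
    f"{SDS_API}/volumes",
    json=payload,
    headers={"Authorization": "Bearer <token>"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["id"])  # ID of the new volume, ready to attach to a host
```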

The following are key capabilities of SDS:

  • Hardware becomes separate from the software that manages it, allowing vendors to focus on software without manufacturing hardware.
  • Storage management software becomes hardware independent, meaning one vendor’s storage management application may work with a wide range of data center hardware.
  • Software can set policies and automate many storage services, such as backup, snapshots, thin provisioning, replication and deduplication (see the sketch after this list).
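
To make the third capability concrete, an SDS policy can be expressed declaratively and applied across volumes. A minimal sketch follows, with hypothetical field names rather than any real vendor's schema:

```python
# A hypothetical, vendor-neutral storage policy. An SDS control plane reads
# a declarative spec like this and automates the services it describes.
gold_tier_policy = {
    "name": "gold-tier",
    "snapshots": {"schedule": "hourly", "retain": 24},
    "replication": {"target": "dr-site-b", "mode": "async"},
    "thin_provisioning": True,
    "deduplication": True,
}

def apply_policy(volume_id: str, policy: dict) -> None:
    """Stand-in for the SDS software applying a policy to one volume."""
    print(f"Applying policy {policy['name']!r} to volume {volume_id}")

apply_policy("vol-0042", gold_tier_policy)
```

Because the policy lives in software rather than in any one array's firmware, the same spec can govern volumes backed by different vendors' hardware.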

When done right, SDS can solve many speed, space and energy-efficiency challenges while cutting costs and making more resources available.

File vs. Object Storage

Before object storage came along, files had to be interpreted by the applications that created them, and they were organized into root-and-branch hierarchies. Developers were tasked with predicting how much storage their applications would require and had to write code for navigating between directories or retrieving files from different devices.
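
As a minimal sketch of that hierarchical model, the application itself has to encode knowledge of the directory tree to locate data (the directory layout below is hypothetical):

```python
# The hierarchical file model: the application must know, or search, the
# directory tree to find its data.
import os

def find_file(root_dir: str, filename: str):
    """Walk a root-and-branch directory tree looking for a single file."""
    for dirpath, _dirnames, filenames in os.walk(root_dir):
        if filename in filenames:
            return os.path.join(dirpath, filename)
    return None  # the application must handle "not found" itself

path = find_file("/var/app-data", "readings-2023-06-01.json")
```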

Object storage flattened hierarchies and made search and retrieval faster due to attached metadata. Scalability became easier, and data center administrators could enable policy management for different objects. However, latency makes object storage less than ideal for primary storage — block storage is better for accessing information that needs to be retrieved frequently and repeatedly.
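
By contrast, object storage retrieves data by key, with metadata attached to each object. Here is a minimal sketch using the AWS S3 API via boto3, one common object store; the bucket and key names are hypothetical:

```python
# The flat, key-plus-metadata model, shown with the AWS S3 API via boto3.
import boto3

s3 = boto3.client("s3")

# Store an object with attached metadata; there is no directory tree to
# manage. Keys may look like paths, but the namespace is flat: a key is
# just a string.
s3.put_object(
    Bucket="example-bucket",
    Key="readings/2023-06-01.json",
    Body=b'{"temperature": 21.4}',
    Metadata={"source": "sensor-17", "schema-version": "2"},
)

# Retrieve by key; the metadata comes back with the object.
obj = s3.get_object(Bucket="example-bucket", Key="readings/2023-06-01.json")
print(obj["Metadata"])     # {'source': 'sensor-17', 'schema-version': '2'}
print(obj["Body"].read())  # the stored bytes
```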

Even so, developers don’t want to guess how much storage they’ll need for their applications. Further, data center administrators don’t want to buy flash and disk arrays according to predicted storage needs, only to have them sit idle. SDS helps data centers make use of the resources they already have, striking an ongoing balance between speed and space. It manages ever-changing combinations of object and block storage, enforces policies for storage functions and gets maximum usage from resources that already exist.

Acquiring an SDS Flexible Infrastructure

When making the move to SDS, data center administrators tend to use one of the following three acquisition models:

  1. Software Only: If administrators have sufficient SSD and disk array resources, they can purchase software for provisioning and orchestrating storage within the resources they already own. This is the least expensive way to move to SDS, but it requires the most skill to implement.
  2. Software Plus Commodity Hardware: Vendors often pair SDS management software with commodity hardware options, which are cheaper than custom ones. Those cost advantages can be offset, however, because commodity disk arrays often require more physical space and power, and they can be less efficient at exploiting advances in SSD technology.
  3. Software Plus Custom Hardware: Custom hardware is more expensive, but it handles the CPU overhead that SDS generates and is better at getting the most from flash.

Today’s users want instant access to API-driven resources from their cloud applications. These cloud applications process massive sets of data, and they require access to both block and object storage. By transitioning to SDS and other flexible infrastructure alternatives, data centers can provide public cloud-like speeds and get more storage space with less hardware.

About The Author

Jacqueline Lee

Freelance Writer

Jacqueline Lee specializes in business and technology writing, drawing on over 10 years of experience in business, management and entrepreneurship. Currently, she blogs for HireVue and IBM, and her work on behalf of client brands has appeared in Huffington Post, Forbes, Entrepreneur and Inc. Magazine. In addition to writing, Jackie works as a social media manager and freelance editor. She's a member of the American Copy Editors Society and is completing a certificate in editing from the Poynter Institute.
