
Software-defined Storage Makes Economic Sense and Frees You From Hardware-defined Lock-in!

What does "software-defined" really mean?

Beware of Storage Hardware Vendors’ Claims That They Are “Software-defined”

By now it should be obvious: it's about the software. But watch what the sales pitches claim. You'll see storage hardware heavyweights leap onto the "we are really software" bandwagon, claiming that they are software-defined storage and hoping to slow the wheels of progress with marketing talk. It's the same old song they sing every year. They want you to ignore the realities driving today's data centers and diverse IT infrastructures, to keep your past buying practices unchanged and, above all, to buy more hardware! They want you to forget that the term "software-defined" is being applied selectively to what runs only on their own storage hardware platforms, and that the feature set you buy will not work across other components or other vendors' storage systems. Beware: their clever sales pitches may sound like "software-defined," but the end objective is clear: "buy more hardware."

Software is the basis for flexibility: smart storage virtualization and management software can improve the utilization of storage resources so that you optimize and right-size to meet your needs. Hardware-defined storage is, by definition, rigid and inflexible, so it leads to purchasing more than you need because you dare not underestimate future demand. Software also allows the latest innovations, such as flash-memory SSDs, to be incorporated into your infrastructure without having to "rip and replace" your existing storage investments.

In other words, hardware-defined is the mantra of storage hardware vendors who want you to "buy more hardware" and repeat the same process every year, rather than getting the most value from your investments and "future-proofing" your infrastructure. Software-defined means optimizing what you already have, whereas "Hardware-defined = Over Provisioning and Oversizing."

Software Is What Endures Beyond Hardware Devices that “Come and Go”

Think about it. Why would you want to lock yourself into this year's hardware solution, or have to buy a specific device just to get a software feature you need? This is old thinking; before virtualization, it was how the server industry worked, and the hardware decision drove the architecture. Today, with software-defined computing exemplified by VMware or Hyper-V, you think about how to deploy virtual machines rather than whether they are running on a Dell, HP, Intel or IBM system. Storage is going through the same transformation, and it will be smart software that makes the difference in a "software-defined" world.

So What Do Users Want From “Software-defined Storage,” and Can You Really Expect It to Come From a Storage Hardware Vendor?

The move from hardware-defined to a software-defined virtualization-based model supporting mission-critical business applications is inevitable and has already redefined the foundation of architectures at the computing, networking and storage levels from being “static” to “dynamic.” Software defines the basis for managing diversity, agility, user interactions and for building a long-term virtual infrastructure that adapts to the constantly changing components that “come and go” over time.

Ask yourself: is it really in the best interest of the traditional storage hardware vendors to go "software-defined" and give up their platform lock-in?

Remember One Thing: Hardware-defined = Over Provisioning and Oversizing

Fulfilling application needs and providing a better user experience are the ultimate drivers for next-generation storage and software-defined storage infrastructures. Users want flexibility, greater automation, better response times and "always on" continuous availability. Therefore, IT shops are clamoring to move all their applications onto agile virtualization platforms for better economics and greater productivity. The business-critical Tier 1 applications (ERP, databases, mail systems, SharePoint, OLTP, etc.) have proven to be the most challenging, and storage has been the major roadblock to virtualizing these demanding applications. Moving storage-intensive workloads onto virtual machines (VMs) can greatly impact performance and availability, and as the workloads grow, these impacts increase, as do costs and complexity.

The result is that storage hardware vendors have to over-provision, over-size for performance and build in extra levels of redundancy within each unique platform to ensure users can meet their performance and business continuity needs.

The costs needed to accomplish this negate the bulk of the benefits. In addition, hardware solutions are sized for a moment in time rather than providing long-term flexibility. Enterprises and IT departments are therefore looking for a smarter, more cost-effective approach, and they are realizing that the traditional "throw more hardware at the problem" solutions are no longer feasible.

Tier 1 Apps Are Going Virtual; Performance and Availability Are Mission Critical

To address these storage impacts, users need the flexibility to incorporate whatever storage they need to do the job at the right price, whether it is available today or comes along in the future. For example, to counter the performance impacts encountered in virtualizing Tier 1 applications, users will want to incorporate and share SSD, flash-based technologies. Flash helps here for a simple reason: electronic memory technologies are much faster than mechanical disk drives. Flash has been around for years, but only recently has it come down far enough in price to allow for broader adoption.

Diversity and Investment Protection; One Size Solutions Do Not Fit All

But flash storage is better suited to read-intensive applications than to write-heavy, transaction-based traffic, and it is still significantly more expensive than spinning disk. It also wears out: taxing applications that generate many writes can shorten the lifespan of this still costly technology. So it makes sense to have other choices for storage alongside flash, reserving flash for where it is needed most and using the other storage alternatives for their most efficient use cases, and then to optimize the performance and cost trade-offs by placing and moving data to the most cost-effective storage tiers that can still deliver acceptable performance. Users will need solutions to share and tier their diverse storage arsenal and manage it together as one, and that requires smart, adaptable software.
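
As a rough illustration of the kind of policy-driven tiering such software can apply, here is a minimal sketch in Python. It is purely hypothetical: the class, thresholds and heuristics are assumptions for illustration, not any particular product's API or algorithm.

```python
from dataclasses import dataclass

# Hypothetical auto-tiering decision: hot, read-heavy data goes to flash;
# cold or write-heavy data stays on lower-cost spinning disk.
# The thresholds below are arbitrary examples, not recommendations.

@dataclass
class VolumeStats:
    name: str
    reads_per_sec: float
    writes_per_sec: float

def choose_tier(stats: VolumeStats,
                hot_iops_threshold: float = 500.0,
                read_ratio_threshold: float = 0.7) -> str:
    """Return 'flash' or 'disk' for a volume based on simple heuristics."""
    total_iops = stats.reads_per_sec + stats.writes_per_sec
    if total_iops == 0:
        return "disk"  # idle data does not justify flash capacity
    read_ratio = stats.reads_per_sec / total_iops
    # Flash favors hot, read-dominated workloads; heavy writes wear flash
    # faster and are often served acceptably by disk.
    if total_iops >= hot_iops_threshold and read_ratio >= read_ratio_threshold:
        return "flash"
    return "disk"

if __name__ == "__main__":
    volumes = [
        VolumeStats("oltp-index", reads_per_sec=900, writes_per_sec=100),
        VolumeStats("backup-archive", reads_per_sec=5, writes_per_sec=50),
    ]
    for v in volumes:
        print(f"{v.name}: place on {choose_tier(v)}")
```

In practice such decisions are made continuously and at a much finer granularity than whole volumes, but the principle is the same: let software, not the hardware silo, decide where data lives.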

And what about existing storage hardware investments? Does it make sense to throw them away and replace them with this year's new models when smart software can extend their useful life? Why "rip and replace" each year? Instead, these existing storage investments and the newest flash devices, disk drives and storage models can easily be made to work together in harmony within a software-defined storage world.

Better Economics and Flexibility Make the Move to “Software-defined Storage” Inevitable

Going forward, users will have to embrace "software-defined storage" as an essential element of their software-defined data centers. Virtual storage infrastructures make sense as the foundation for scalable, elastic and efficient cloud computing. As users deal with the new dynamics and faster pace of today's business, they can no longer be trapped within yesterday's more rigid, hard-wired architecture models.

“Software-defined” Architecture, Not the Hardware, Is What Matters

Clearly, the success of software-defined computing solutions from VMware and Microsoft Hyper-V has proven the compelling value proposition that server virtualization delivers. Likewise, the storage hypervisor and the use of virtualization at the storage level are the key to unlocking the hardware chains that have made storage an anchor on next-generation data centers.

“Software-defined Storage” Creates the Need for a Storage Hypervisor

We need the same thinking that revolutionized servers to transform storage. We need smart software that can be used enterprise-wide to be the driving force for change; in effect, we need a storage hypervisor whose main role is to virtualize storage resources and to deliver the same benefits – agility, efficiency and flexibility – that server hypervisor technology brought to processors and memory.
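
To make the pooling idea concrete, here is a minimal, hypothetical sketch in Python of how a storage hypervisor might aggregate heterogeneous devices into one virtual pool and carve virtual disks from it. The class names and allocation logic are illustrative assumptions, not any particular product's implementation.

```python
# Hypothetical sketch of the pooling idea behind a storage hypervisor:
# heterogeneous devices are aggregated into one virtual pool, and virtual
# disks are carved from that pool regardless of which vendor's hardware
# supplies the capacity.

class PhysicalDevice:
    def __init__(self, vendor: str, capacity_gb: int):
        self.vendor = vendor
        self.capacity_gb = capacity_gb
        self.free_gb = capacity_gb

class StoragePool:
    def __init__(self):
        self.devices = []

    def add_device(self, device: PhysicalDevice) -> None:
        self.devices.append(device)

    def total_free_gb(self) -> int:
        return sum(d.free_gb for d in self.devices)

    def allocate_virtual_disk(self, size_gb: int) -> dict:
        """Spread an allocation across whichever devices have free space."""
        if size_gb > self.total_free_gb():
            raise ValueError("pool has insufficient free capacity")
        allocation, remaining = {}, size_gb
        for device in self.devices:
            if remaining == 0:
                break
            take = min(device.free_gb, remaining)
            if take > 0:
                device.free_gb -= take
                allocation[device.vendor] = allocation.get(device.vendor, 0) + take
                remaining -= take
        return allocation  # mapping of device vendor -> GB contributed

if __name__ == "__main__":
    pool = StoragePool()
    pool.add_device(PhysicalDevice("VendorA", 300))
    pool.add_device(PhysicalDevice("VendorB", 500))
    print(pool.allocate_virtual_disk(450))
```

The point of the abstraction is that the virtual disk consumer never sees or cares which vendor's box supplied the blocks, which is exactly the decoupling that server hypervisors delivered for CPU and memory.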

Virtualization has transformed computing, and therefore the key applications we depend on to run our businesses need to go virtual as well. Enterprise and cloud storage are still living in a world dominated by physical, hardware-defined thinking. It is time to think of storage in a "software-defined" world. That is, storage system features need to be available enterprise-wide, not just embedded in a particular proprietary hardware device.

Summary

Be cautious: beware of vendors who deliver hardware but talk "software-defined."

