We recently published a market survey, The State of Software-Defined, Hyperconverged and Cloud Storage, with interesting findings (see the full report here). Below, I want to focus on a couple of areas that particularly caught my eye.
Business Continuity is Difficult
The research showed that the biggest concern for storage infrastructure of any type is minimizing disruptions and providing business continuity, whether on-premises or in the cloud. What surprises me is that IT has been trying to solve this problem for decades. Why is it still so hard?
The answer is that business continuity is complicated. There is a difference between high availability and disaster recovery, and despite what some vendors may say, a backup, or even a mirror copy, will not solve the issue by itself. Business continuity is the assurance that, in the event of a disaster, the business will be able to recover in a timely fashion and continue to operate.
There are many factors and decisions to consider in creating an effective business continuity strategy: mirroring or replication, synchronous or asynchronous, local and remote copies, how many copies of the data are needed, and so on. Ultimately, the system should meet the business's needs in terms of Recovery Time Objective (RTO) and Recovery Point Objective (RPO).
The part most often overlooked is the actual process of failing over, failing back, and restoring the system to its original state, ideally with no disruption and no manual intervention. Runbooks must be written and regularly tested.
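To make RTO and RPO concrete, here is a minimal Python sketch of how a team might sanity-check a continuity plan against those two targets. The class, its fields, and the numbers in the example are hypothetical illustrations, not figures from the survey.

```python
# Minimal sketch: checking whether a replication setup meets hypothetical
# RTO/RPO targets. All names and numbers are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class ContinuityPlan:
    rpo_target_s: float       # maximum acceptable data loss, in seconds
    rto_target_s: float       # maximum acceptable downtime, in seconds
    replication_lag_s: float  # ~0 for synchronous mirroring, >0 for async replication
    failover_time_s: float    # measured time to detect the failure and switch over

    def meets_rpo(self) -> bool:
        # Writes that happened within the replication lag window can be lost.
        return self.replication_lag_s <= self.rpo_target_s

    def meets_rto(self) -> bool:
        # The service is down from the failure until failover completes.
        return self.failover_time_s <= self.rto_target_s


# Example: async replication with 30 s of lag and a tested 10-minute failover
plan = ContinuityPlan(rpo_target_s=60, rto_target_s=900,
                      replication_lag_s=30, failover_time_s=600)
print(plan.meets_rpo(), plan.meets_rto())  # True True
```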
In 2019, business continuity will still be hard, but there are a number of technologies that can make it much easier.
Achieving Cloud Maturity
It now seems as if cloud is a standard component of IT. Most technologists understand when and how the cloud can be valuable. Lew Moorman, the former president of Rackspace, once told me ‘the cloud is for everyone, but it is not for everything.’ This is very true. Our research shows that 42% of respondents are not considering public cloud storage.
About eight years ago, IT looked at the cloud as a way to reduce costs. Today, the industry recognizes that the primary value of the cloud is agility, not cost savings. In fact, many organizations have experienced cloud ‘sticker shock’ as usage keeps rising and workloads move to higher-class services (e.g., larger instances with more resources, provisioned IOPS). Add in charges for bandwidth, API calls, and the like, and the result is typically a large bill.
Our research showed that 36% of respondents have realized the cloud is more expensive than on-premises options. Purely from an economic perspective, the cloud is often like renting a car: convenient and inexpensive for a few days, but not economical in the long run, especially at scale.
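To illustrate the rental-car economics, here is a back-of-the-envelope Python sketch comparing cumulative cloud storage spend with an amortized on-premises purchase. Every rate, capacity, and overhead figure is a placeholder assumption, not survey data or a provider quote.

```python
# Rough "renting vs. buying" comparison. All prices are placeholders.
CLOUD_PER_GB_MONTH = 0.023       # assumed object-storage rate, $/GB/month
EGRESS_PER_GB = 0.09             # assumed egress rate, $/GB
ONPREM_PER_GB_UPFRONT = 0.35     # assumed acquisition cost, $/GB
ONPREM_MONTHLY_OPEX_RATE = 0.01  # power/cooling/admin as a fraction of capex per month

capacity_gb = 500_000            # 500 TB of data
egress_gb_per_month = 20_000     # assumed monthly egress

cloud_monthly = capacity_gb * CLOUD_PER_GB_MONTH + egress_gb_per_month * EGRESS_PER_GB
onprem_capex = capacity_gb * ONPREM_PER_GB_UPFRONT
onprem_monthly = onprem_capex * ONPREM_MONTHLY_OPEX_RATE

for month in range(1, 61):
    cloud_total = cloud_monthly * month
    onprem_total = onprem_capex + onprem_monthly * month
    if cloud_total > onprem_total:
        print(f"With these assumptions, cloud overtakes on-prem around month {month}")
        break
```

Under these made-up numbers the crossover comes after little more than a year; the point is not the specific month but that recurring charges eventually dominate at scale.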
At the same time, everything is getting cloudier: what we call traditional on-premises infrastructure is increasingly software-controlled, dynamically scalable, and virtualized, often running in a colocation environment.
While the cloud is an increasingly important tool for IT and continues to grow, traditional datacenters are not going anywhere; on-premises IT spending is growing as well.
Avoiding Storage Vendor Lock-in
42% of respondents said their top concern is being locked into a storage vendor. Why? Storage hardware vendors offer creative multi-year programs that require long-term commitments, and these programs often result in lost freedom, high costs, and little flexibility.
This is especially important when taking storage technology waves into account. With each wave, new vendors seem to pop up, and IT departments migrate to storage systems from these newcomers. One should expect that new technologies, vendors, and market leaders will emerge again in the next few years. Why limit your options to what will soon be a legacy vendor?
In an age where storage hardware is a commodity, it is only natural that IT managers try to avoid this lock-in. The value of storage systems increasingly lies in the services delivered through software: mirroring, auto-tiering, thin provisioning, etc.
For example, software-defined storage can deliver these services across a heterogeneous environment, often with more advanced capabilities than those offered by the hardware manufacturers. There are other benefits to placing this intelligent virtualization layer, implemented in software, on top of the hardware, similar to the benefits of compute virtualization: lower costs from automation, simpler management, fewer migrations, and a consistent set of services across storage systems.
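As a rough illustration of what such a virtualization layer does, the Python sketch below pools capacity from two heterogeneous backends and applies one software-delivered service, thin provisioning, across both. The classes, names, and numbers are invented for illustration and do not describe any particular SDS product.

```python
# Illustrative sketch of an SDS-style virtualization layer: one capacity pool
# over heterogeneous backends, with thin provisioning applied in software.
from dataclasses import dataclass, field


@dataclass
class Backend:
    name: str          # e.g. "vendor-A all-flash array", "vendor-B JBOD"
    capacity_gb: int


@dataclass
class VirtualPool:
    backends: list[Backend]
    volumes: dict[str, int] = field(default_factory=dict)  # volume name -> provisioned GB

    def physical_capacity(self) -> int:
        return sum(b.capacity_gb for b in self.backends)

    def create_thin_volume(self, name: str, size_gb: int, overcommit: float = 2.0) -> None:
        # Thin provisioning: logical capacity may exceed physical capacity,
        # up to an overcommit ratio, because blocks are only allocated on write.
        provisioned = sum(self.volumes.values()) + size_gb
        if provisioned > self.physical_capacity() * overcommit:
            raise ValueError("overcommit limit reached")
        self.volumes[name] = size_gb


pool = VirtualPool([Backend("array-A", 10_000), Backend("array-B", 20_000)])
pool.create_thin_volume("vm-datastore", 40_000)  # a thin volume larger than either array
```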
It should not be a surprise then that our research showed that 56% of IT departments are strongly considering or plan to consider software-defined storage in the next 12 months. In fact, the intent to adopt SDS is almost double that of hyperconverged infrastructure (37% vs 21% respectively). With the freedom offered by SDS, there is no excuse for letting a hardware vendor lock you in.
Slow Adoption of NVMe
One of the new technology waves is NVMe, which further advances flash performance by providing parallel connections to high-speed storage. Of course, adopting new technologies is hard. One reason is the cost of replacing existing hardware with the latest generation. Our research showed that only 6.5% of respondents have more than 50% of their environment powered by NVMe.
Another reason is the complexity of migrating hardware and building a full system that can actually benefit from the performance of NVMe-based storage. The research showed that about half of respondents have not adopted NVMe in their environment. There are several options for IT departments that want to benefit from NVMe-class performance, including:
- Use Parallel I/O technology to provide similar parallel processing benefits for non-NVMe hardware. It is fairly common to see a 5X improvement in storage performance when using Parallel I/O and additional intelligent caching algorithms to remove the I/O bottleneck created by single threading.
- With software-defined storage and dynamic block-level auto-tiering, even a small amount of direct-attached NVMe storage can deliver significant performance improvements to all performance-critical applications (see the sketch after this list).
- Software-defined storage can also enable very efficient use of NVMe over a SAN, for example via Gen 6 Fibre Channel, which supports up to 1.6 million IOPS on a single port. SDS helps unlock the performance potential of the system and eliminates the need for migrations.
- NVMe over Fabrics (NVMe-oF) is the future model for maximizing performance, but it has not yet reached a level of maturity that would enable mainstream deployment.
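To show what the block-level auto-tiering mentioned above does in principle, here is a simplified Python sketch in which the most frequently accessed blocks are periodically promoted to a small NVMe tier. The tier size and data structures are illustrative assumptions; real products track heat continuously and move data in the background.

```python
# Simplified block-level auto-tiering: keep the hottest blocks on a small NVMe
# tier and everything else on bulk storage. Illustration only.
from collections import Counter

NVME_TIER_BLOCKS = 1_000                 # assumed capacity of the fast tier, in blocks

access_counts: Counter[int] = Counter()  # block id -> access count
nvme_tier: set[int] = set()              # block ids currently on the NVMe tier


def record_access(block_id: int) -> None:
    access_counts[block_id] += 1


def retier() -> None:
    """Promote the hottest blocks to the NVMe tier and demote the rest."""
    hottest = {block for block, _ in access_counts.most_common(NVME_TIER_BLOCKS)}
    nvme_tier.difference_update(nvme_tier - hottest)  # demote cooled-off blocks
    nvme_tier.update(hottest - nvme_tier)             # promote newly hot blocks


# Example: a performance-critical application hammers a small working set
for _ in range(100):
    for block in range(50):
        record_access(block)
retier()
print(len(nvme_tier))  # the hot working set (50 blocks) now sits on the NVMe tier
```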
Not every application requires NVMe performance, or even flash. However, as with most emerging technologies, in the near future many new storage systems will likely be powered by NVMe-class storage.
The Evolution of Hyperconverged Infrastructure
Hyperconverged infrastructure (HCI) gives datacenters the ability to deploy fully integrated systems in a very simple way. However, it does not come without trade-offs.
Almost 40% of survey respondents said they are ruling out HCI because it does not integrate with existing systems (creating silos), is too expensive, results in vendor lock-in, and cannot scale compute and storage independently.
As one would expect, HCI technology is evolving. A software-based approach can provide a hyperconverged system that can access external storage and present storage to external hosts, breaking down the silos and making it possible to scale storage and compute independently. In addition, because it is one of the deployment models of SDS, a datacenter has the flexibility to migrate to HCI or hybrid-converged, or even back to a traditional model, as needed. To learn more about hybrid-converged, I suggest reading this post.
Interested in learning more? Check out the full research report here or download our survey infographic!