
Hyperconverged and Software-Defined Storage, Why They Go Together

Real-world scenarios showing why hyperconverged infrastructure is the right approach for combining storage, compute, networking, and virtualization in one unit

One of the fundamental requirements for virtualizing applications is the underlying shared storage. Applications can move between servers as long as those servers have access to the storage holding the application and its data. Shared storage is typically provided over a Storage Area Network (SAN). However, SANs often present problems in virtualized environments. The first is providing consistent, reliable I/O performance where it is needed. As different applications start, stop, and process data, the load on the SAN varies greatly. If a database kicks off a large data-processing job, the SAN may become overwhelmed and begin degrading the performance of other applications that are operating normally.


Performance-sensitive applications are particularly susceptible to this issue, including databases (Oracle, Microsoft SQL Server); database-backed applications and ERP systems (SAP, Oracle Applications, Microsoft SharePoint, Microsoft Dynamics); VDI (VMware and Citrix); and communications systems (Microsoft Exchange, VoIP). In addition, as the number of applications in the environment grows, IT needs to scale out infrastructure quickly and seamlessly, yet any time maintenance is performed on a SAN, the storage must go offline, causing disruption. Another issue, especially in smaller environments such as remote sites, is the reliability and complexity of SANs. When remote or regional locations (retail shops, bank branches, manufacturing plants, call centers, distribution centers, surgeries, etc.) run applications on-site, IT must address the availability and manageability of that infrastructure. In the simplest case, an office has two servers to ensure high availability at the compute layer.

However, the servers are connected to a SAN (typically a low-end storage array and its network connections), which is itself a single point of failure. If the SAN goes offline for any reason, it doesn't matter that there are two servers: the applications suffer an outage, which disrupts the business. Usually there is no IT staff on-site, so simplicity of management and reduced complexity are very important. Because of these challenges with SANs in virtual environments, organizations are looking for new options, and hyperconverged infrastructure is a solution that seems well-suited to address them. The quick calculation below puts the single-point-of-failure problem in numbers.
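As a back-of-the-envelope illustration (the availability figures here are assumptions for the sketch, not measurements from any vendor), the overall availability of two clustered servers behind one SAN is capped by the SAN, because the SAN sits in series with the redundant server pair:

```python
# Toy availability model with assumed figures: two servers in parallel
# (either one can run the workloads) behind a single shared SAN in series.

a_server = 0.99    # assumed availability of one server
a_san    = 0.999   # assumed availability of the shared SAN

servers_pair = 1 - (1 - a_server) ** 2   # probability at least one server is up
overall      = servers_pair * a_san      # the SAN gates the whole stack

print(f"two-server pair alone: {servers_pair:.5f}")  # 0.99990
print(f"pair behind one SAN:   {overall:.5f}")       # 0.99890, below the SAN itself
```

However redundant the compute layer is, the stack can never be more available than the single SAN beneath it.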

Why Hyperconverged?

To provide consistently high performance, IT can create application-specific clusters. By running the same type of application on a cluster (e.g., databases), IT can manage performance and identify and resolve bottlenecks more effectively. In addition, to avoid the performance limitations of a SAN, hyperconverged storage uses the Direct-Attached Storage (DAS) inside the servers as shared storage, moving data closer to the applications. This architecture delivers better I/O performance close to the application (and therefore better response times) with less complexity and lower cost.
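As a concrete (and deliberately simplified) illustration of the idea, the sketch below models how a hyperconverged layer might synchronously mirror writes across the DAS of two nodes so that either node can serve the data if the other fails. The class names and block-level interface are hypothetical, invented for this sketch; real products, DataCore's included, do this in the I/O path with far more machinery (caching, resynchronization, multipathing):

```python
# Illustrative sketch only: a toy model of pooling two servers' local
# direct-attached storage (DAS) into mirrored, highly available storage.

class Node:
    """One server contributing its local DAS to the shared pool."""
    def __init__(self, name: str):
        self.name = name
        self.blocks: dict[int, bytes] = {}   # block address -> data
        self.online = True

class MirroredPool:
    """Synchronously mirrors every write to both nodes' local disks."""
    def __init__(self, a: Node, b: Node):
        self.nodes = (a, b)

    def write(self, addr: int, data: bytes) -> None:
        # A write completes only once both copies are stored, so a
        # single-node failure never loses acknowledged data.
        for node in self.nodes:
            if node.online:
                node.blocks[addr] = data

    def read(self, addr: int) -> bytes:
        # Any surviving replica can satisfy the read.
        for node in self.nodes:
            if node.online and addr in node.blocks:
                return node.blocks[addr]
        raise IOError(f"block {addr} unavailable on all nodes")

pool = MirroredPool(Node("server-1"), Node("server-2"))
pool.write(0, b"vm-disk-block")
pool.nodes[0].online = False             # simulate a server outage
assert pool.read(0) == b"vm-disk-block"  # still served by server-2
```

The essential property is that a write is acknowledged only after both local copies exist, which is what lets plain server-internal disks behave like highly available shared storage.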

Reliable Application Performance

A hospital recently used DataCore Software's Virtual SAN software to create a hyperconverged system and achieve better, more consistent application performance. The hospital had 12 physical servers running its PBX system. The organization wanted to virtualize this application (into 12 VMs), but it was essential to preserve the same level of reliable performance as the physical servers, since voice communication is vital in a hospital environment. The hospital knew it wanted a dedicated cluster for the virtualized PBX application, but its IT staff was not satisfied with the available options, such as VMware Virtual SAN, which required a minimum of three physical servers (and, as the staff later learned, four were actually recommended).

The consensus was that using three servers to run 12 VMs was wasteful and unnecessarily expensive. Instead, the hospital chose DataCore's Virtual SAN software, which required only two servers for failover (one server fewer out of three), reducing costs by 33% from the outset. In addition, DataCore Virtual SAN uses adaptive RAM caching to accelerate I/O. RAM is generally 10x faster than flash storage, so the performance of the virtualized PBX was "through the roof." The RAM caching also made flash storage optional, further reducing costs for the hospital.
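To make the caching idea concrete, here is a minimal sketch of a read-through LRU cache held in RAM in front of slower media. It illustrates the general technique, not DataCore's adaptive caching algorithm, and the names and capacity value are assumptions:

```python
from collections import OrderedDict

class RamReadCache:
    """Minimal LRU read cache in RAM in front of a slower backing store."""
    def __init__(self, backing_read, capacity_blocks: int):
        self.backing_read = backing_read                  # function: addr -> bytes
        self.capacity = capacity_blocks
        self.cache: OrderedDict[int, bytes] = OrderedDict()

    def read(self, addr: int) -> bytes:
        if addr in self.cache:                # hit: served at RAM speed
            self.cache.move_to_end(addr)
            return self.cache[addr]
        data = self.backing_read(addr)        # miss: pay the flash/disk latency
        self.cache[addr] = data
        if len(self.cache) > self.capacity:   # evict the coldest block
            self.cache.popitem(last=False)
        return data

def slow_disk(addr: int) -> bytes:
    return b"block-%d" % addr                 # stand-in for a flash/disk read

cache = RamReadCache(slow_disk, capacity_blocks=1024)
assert cache.read(7) == cache.read(7)         # second call is a RAM hit
```

Hot blocks (the PBX's active call data, for instance) end up answered from memory, which is the basic effect behind the performance the hospital reported.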

The last consideration was the ability to scale. With a converged architecture, compute and memory scale together with storage capacity; if more storage capacity is needed but additional compute and memory are not, the options are less than desirable: IT can either swap the servers' drives for higher-capacity ones or add another server. DataCore Virtual SAN, however, with its Integrated Storage Architecture, can use a central SAN to complement the direct-attached storage inside the servers. Additional storage capacity is made available from the central SAN, and data resides on the tier that best matches its performance requirements. For example, "hot" data stays close to the server tier while "cold" data remains on the SAN. This gave the hospital the ability to optimize application performance and the flexibility to scale storage and compute/memory independently as needed.
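The tiering decision itself can be pictured as a simple placement policy. The sketch below promotes frequently accessed blocks to the server-side DAS tier and leaves idle ones on the central SAN; the threshold, tier names, and access-count model are assumptions for illustration, not DataCore's actual algorithm:

```python
from collections import Counter

HOT_THRESHOLD = 100   # accesses per sampling window (assumed value)

def plan_placement(access_counts: Counter) -> dict[int, str]:
    """Map each block address to the tier that matches its activity."""
    return {
        addr: ("server-DAS" if count >= HOT_THRESHOLD else "central-SAN")
        for addr, count in access_counts.items()
    }

counts = Counter({1: 500, 2: 3})      # block 1 is hot, block 2 is cold
print(plan_placement(counts))          # {1: 'server-DAS', 2: 'central-SAN'}
```

A real tiering engine would sample continuously and migrate data in the background, but the core idea is the same: activity determines placement, so fast server-side storage is reserved for the data that benefits from it.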

Regional Support

For regional sites that need to run a mix of workloads on highly available infrastructure, the logical solution is to turn the local storage in the servers into redundant shared storage, thereby increasing availability. In addition, reducing the amount of hardware needed for availability reduces the physical footprint of the infrastructure (which may be limited at a remote branch) as well as its cost. Lastly, by combining compute, network, and storage into one infrastructure, the complexity of managing separate pieces is removed.

Software-Defined Storage Brings It All Together

There is a downside to hyperconverged storage: each deployment becomes a separate storage system to manage and maintain. To keep hyperconverged infrastructure from becoming yet another separate data island, it needs to be integrated into the overall storage infrastructure and its management. This is where DataCore's software-defined storage platform comes in.

By augmenting hyperconverged infrastructure with the capacity of existing SANs (and the investments already made in them), DataCore can scale storage capacity and performance easily and efficiently. More importantly, the DataCore SDS platform unifies storage systems from different vendors and provides one comprehensive set of storage services across the entire storage infrastructure, under a single pane of management, making it easy to administer the storage infrastructure and to unify separate data islands.

