Webcast Transcript
David: And with that, I’m excited to introduce Manish, senior product marketing manager at DataCore. Manish, are you there?
Manish: Yes, I’m here. Can you hear me?
David: I can. Yeah. Thank you for being on. Take it away.
Manish: Thank you. I appreciate it. All right. Okay.
So thank you. So yeah, like he said, my name is Manish Chacko, and I’m the senior product manager at DataCore Software. We’re the leading provider of software-defined storage. Basically, what I want to talk to you about today is how to consolidate as well as accelerate your storage using a combination of SDS, or software-defined storage, and NVMe.
So thank you for sitting through the presentation. I know this is the last one, so I will try to make it as quick as possible.
Data and Performance Requirements
Okay. So starting off, I want to talk about data and performance requirements. What we have found is that customer data nearly doubles every couple of years, but your storage budget does not — it grows roughly 5 percent to 10 percent per year. So as storage admins and professionals, you’ve got a tough task in trying to manage the expectations of the business with the budget that comes along with it.
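The budget squeeze Manish describes is simple compound growth; a quick sketch (using the talk’s rough figures purely for illustration) shows how fast the gap opens:

```python
# Sketch: compare data growth (doubling every couple of years) with a
# storage budget growing ~7% per year over a 6-year horizon. The
# starting values are made up; only the growth rates come from the talk.
def data_tb(start_tb, years):
    # doubling every 2 years
    return start_tb * 2 ** (years / 2)

def budget(start, years, annual_growth=0.07):
    return start * (1 + annual_growth) ** years

for year in range(0, 7, 2):
    print(f"year {year}: data {data_tb(100, year):7.1f} TB, "
          f"budget ${budget(100_000, year):,.0f}")
```

After six years the data has grown eightfold while the budget is up only about 50 percent — the gap the rest of the talk tries to close.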
So how do you solve that? So what should you consider? All right. So we’ll start off what are the questions I should ask myself? What are my pain points?
Performance is usually one of them, but we also have other considerations like data center costs, as well as rack space, operational costs, energy, cooling, things of that nature.
So the next question I like to ask myself is where, or how, does Flash — NVMe Flash — actually help me. You’ve got to remember that a terabyte of Flash is still an order of magnitude more expensive than a terabyte on an HDD.
What we have found at DataCore, when we have looked at customers, is that for 80 percent to 90 percent of our customers, about 10 percent of their data generates almost 90 percent of the I/Os. So there are opportunities to optimize those I/Os without having to update your entire infrastructure. You can selectively deploy NVMe Flash and use software-defined storage to optimize the bulk of your I/O. That way you can manage your storage infrastructure.
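The 10-percent/90-percent skew can be made concrete with a small sketch: given per-extent I/O counts, find the smallest fraction of extents that covers most of the I/O. This is an illustration of the pattern, not DataCore code:

```python
# Hypothetical sketch: how much of the data is "hot"? Given per-extent
# I/O counts, find the smallest fraction of extents that account for
# 90% of the I/O (the skew Manish describes).
def hot_fraction(io_counts, target=0.90):
    total = sum(io_counts)
    covered = 0
    for i, count in enumerate(sorted(io_counts, reverse=True), start=1):
        covered += count
        if covered >= target * total:
            return i / len(io_counts)
    return 1.0

# A skewed workload: one extent of eleven receives 900 of 1,000 I/Os.
print(hot_fraction([900] + [10] * 10))  # ≈ 0.09: ~9% of extents serve 90% of I/O
```

With a distribution like this, accelerating a small slice of the data speeds up nearly all of the I/O.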
And then you’ve got to ask yourself questions, like, okay, what is the performance guarantee? And what is a sustainable or expandable entry into Flash? Let’s talk about that in our next slide.
So we all know Flash is fast, but how fast is it really? If you look at this graph, on the X-axis you have 8K blocks of data. It starts at 100 percent read and zero percent write on the left, moves on to 90 percent read, 10 percent write, and so on and so forth, until you get to zero percent read and 100 percent write. You’ll notice that the latency starts at around 500 microseconds — pretty fast — and then slowly moves up into the 4-plus-millisecond category.
What we at DataCore provide is software-defined storage. Basically, it uses server-side cache to give you consistently low latency, regardless of the workloads that you’re running. You’ll notice that with the same kind of workload, the addition of software-defined storage lets us maintain a consistent latency of roughly 500 microseconds, regardless of whether we start at 100 percent read, all the way to 100 percent write.
Features
One of the features of our software-defined storage, SANsymphony, is the random write accelerator. And by the way, these figures are available on our website, so if you simply Google “DataCore random write accelerator,” you’ll see these figures and the explanation of how you’d [unintelligible 00:04:39]. Basically, what we did was we looked at a set of hard-disk drives and found we were getting about 327 IOPS. Then we added the DataCore SANsymphony product, and with our random write accelerator and cache we were able to increase that to 11,000 IOPS — which, as you’ll notice from the graph, is more than the SATA SSD, which in this case generated 10,000 IOPS.
And so you might say, okay, why not just stick with that? Well, if your IOPS requirements are even greater, you’ll notice that in the case of the SSD, we were able to boost the IOPS from 10,000 — which is a tough ceiling, given the laws of physics, if you will — to about 36,000. So you get significant improvements in IOPS out of your existing infrastructure, and when you upgrade, you get more than what is theoretically possible, simply because we are using the cache, taking in I/Os as fast as they can come, and then writing and committing them as fast as possible.
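The write-coalescing idea — acknowledge random writes in cache, then commit them to disk efficiently — can be sketched like this (a toy illustration, not SANsymphony’s implementation):

```python
# Illustrative sketch of the random-write-accelerator idea: a
# write-back cache absorbs random writes at memory speed, then
# flushes them to disk in one sorted pass, turning random I/O into a
# near-sequential stream.
class WriteBackCache:
    def __init__(self):
        self.dirty = {}           # block number -> data

    def write(self, block, data):
        self.dirty[block] = data  # acknowledged at RAM speed

    def flush(self, backend):
        # commit in ascending block order so the disk sees sequential
        # writes instead of random seeks
        for block in sorted(self.dirty):
            backend[block] = self.dirty[block]
        self.dirty.clear()

disk = {}
cache = WriteBackCache()
for block in (42, 7, 19):         # random write order from the host
    cache.write(block, f"data-{block}")
cache.flush(disk)
print(sorted(disk))               # all blocks committed: [7, 19, 42]
```

The host sees the RAM-speed acknowledgment; the disk sees an ordered flush — which is why the measured IOPS can exceed what the raw device sustains under random writes.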
The next thing I want to talk about is the parallel I/O technology that we have. Usually when you’re doing serial scheduling, you have a bunch of cores that are idle, and so you’re not fully utilizing all the hardware that you purchased. If you look over on the right side with parallel I/O, we are making sure that all the cores you purchased are used. NVMe takes advantage of low-latency data paths, and in combination with our parallel I/O, we take advantage of multiple cores. So we’re giving you a one-two punch: it’s not just the NVMe Flash; we’re also providing parallel I/O to boost the performance you get even further.
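The serial-versus-parallel scheduling contrast can be sketched with a thread pool; the function and variable names here are illustrative, not DataCore APIs:

```python
# Sketch of the serial-vs-parallel I/O scheduling idea: fan requests
# out across worker threads instead of draining one queue on one core.
from concurrent.futures import ThreadPoolExecutor

def handle_io(request):
    # placeholder for a real read/write against the device
    return f"done-{request}"

requests = list(range(8))

# serial scheduling: one core works through the queue
serial = [handle_io(r) for r in requests]

# parallel scheduling: multiple cores take requests concurrently
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(handle_io, requests))

assert serial == parallel  # same results, but the work is spread out
```

`pool.map` preserves request order, so correctness is unchanged; only the concurrency differs.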
The last feature I want to talk about is auto-tiering. The way SANsymphony works is it determines how hot, if you will, your data is. Then, depending on how frequently it’s accessed, it puts it in either Tier 1, Tier 2, Tier 3, all the way up to Tier n — we support up to 15 tiers. Your NVMe Flash would probably be Tier 1. Then your Tier 2 could be your all-Flash storage array, Tier 3 could be hybrid, and Tier 4 could be your spinning disks.
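A minimal sketch of the auto-tiering decision, with made-up access thresholds and four tiers standing in for SANsymphony’s up-to-15:

```python
# Hypothetical auto-tiering sketch: place each extent on a tier by
# access frequency. The thresholds and the four-tier layout are
# invented for illustration only.
def choose_tier(accesses_per_hour, thresholds=(1000, 100, 10)):
    # hotter data -> lower tier number -> faster media
    for tier, threshold in enumerate(thresholds, start=1):
        if accesses_per_hour >= threshold:
            return tier
    return len(thresholds) + 1    # coldest tier: spinning disk

print(choose_tier(5000))  # 1: NVMe Flash
print(choose_tier(500))   # 2: all-Flash array
print(choose_tier(50))    # 3: hybrid
print(choose_tier(2))     # 4: HDD
```

In the real product the placement is continuous — extents are promoted and demoted as their access patterns change.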
Also, the advantage is that these could be from multiple vendors. We don’t care what hardware vendor it is; we support all of them, or all of the major ones. And whether they connect over Fibre Channel or other protocols, we can help you manage the multiple different storage vendors that you have — either because of mergers and acquisitions, or perhaps because a business need changed and you had to go to a different vendor for whatever reason.
Also, I mentioned previously that we do caching. We use as much RAM as possible to speed up what the application host sees as its disks. Basically, we present a logical disk up to the application host, and the application host thinks it’s got the fastest disk possible. We simply use as much RAM as possible — since RAM is cheap these days, it’s a quick and easy way to boost the I/O available to your application host.
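The RAM-caching idea — serve repeat reads from memory so the host sees a fast logical disk — is essentially an LRU cache in front of a slower backend; a minimal sketch (not DataCore’s implementation):

```python
# Minimal sketch of a RAM read cache in front of a slow disk: recent
# blocks are served from memory; the least recently used block is
# evicted when capacity is exceeded.
from collections import OrderedDict

class RamCache:
    def __init__(self, capacity, backend):
        self.capacity = capacity
        self.backend = backend
        self.cache = OrderedDict()

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)   # mark as recently used
            return self.cache[block]
        data = self.backend[block]          # slow path: hit the disk
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return data

slow_disk = {1: "a", 2: "b", 3: "c"}
cache = RamCache(capacity=2, backend=slow_disk)
cache.read(1); cache.read(2); cache.read(1)
cache.read(3)                 # evicts block 2, the least recently used
print(list(cache.cache))      # [1, 3]
```

The application host only ever calls `read`; whether the answer came from RAM or disk is invisible to it, which is the point of the logical-disk abstraction.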
Software-Defined Storage
So let’s talk about SDS, or software-defined storage. You consume SDS with your physical servers, your virtual machines, and now of course with your containers. The access methods that you use are Fibre Channel, iSCSI, NFS, or SMB. Then there are the operations and insights that you typically expect: provisioning; data migration; insights into your data, both real-time and historical; health and performance; and of course analytics and observability. I’m not going to go into each one of those, but that’s basically what people look for in a modern software-defined storage solution.
And then of course there’s management, command and control. That’d be REST APIs and the ability to use PowerShell cmdlets to write your custom scripts. And then of course plug-ins for [unintelligible 00:09:18], all managed by a robust console.
We support these storage protocols: NVMe, Fibre Channel, iSCSI, and of course SAS and SATA. And then cloud, which, you know, is arguably both a destination and a protocol in and of itself.
Last but not least are the data services that SANsymphony provides. I’m not going to get into each one of these, but we already talked about auto-tiering, caching, and the random write accelerator.
I want to briefly touch on CDP, or continuous data protection. Basically, it captures your data at up to one-second granularity. I like to call it TiVo for the storage infrastructure. So if you get hit by ransomware or any other virus or issue, you can almost “rewind” back to a certain day at a certain time, down to the second as needed. You can recover from that, and your business can carry on. So it’s a great business continuity and disaster recovery solution.
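Conceptually, CDP is a timestamped journal of writes that can be replayed up to any point in time; a toy sketch of the “rewind” idea (an illustration of the concept, not DataCore’s on-disk format):

```python
# Conceptual sketch of continuous data protection: every write is
# journaled with a timestamp, so the volume can be "rewound" to how
# it looked at any given second.
class CdpVolume:
    def __init__(self):
        self.journal = []   # list of (timestamp, block, data)

    def write(self, ts, block, data):
        self.journal.append((ts, block, data))

    def rewind(self, ts):
        # rebuild the volume state as of time ts by replaying the
        # journal up to that point
        state = {}
        for when, block, data in self.journal:
            if when <= ts:
                state[block] = data
        return state

vol = CdpVolume()
vol.write(100, "a", "clean")
vol.write(200, "a", "encrypted-by-ransomware")
print(vol.rewind(150))  # {'a': 'clean'} - the pre-attack state
```

Recovering from ransomware is then just choosing a timestamp before the attack and restoring that replayed state.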
Of course, there’s the usual suspects that people expect – encryption, load balancing, synchronous mirroring, asynchronous mirroring, thin provisioning, things of that nature.
Audience Poll
David: Manish, we’ve got the poll ready now. I’m just going to –
Manish: – Excellent. Yeah.
David: Yeah, let me go ahead and bring that up. So the question you should see on the screen is: do you already use SDS, or software-defined storage, and/or HCI? And the answers are: yes, we use software-defined storage; yes, we use hyperconverged infrastructure; yes, we use both; no, we’re not using either one; or no, but we’re doing an evaluation.
So I already see the answers rolling in. Thank you, everyone, for those responses. Let’s get a few more responses. And I will share the results with you. And I’m curious to find out what everyone is doing out there.
All right. Looks like lots of responses came in. Let me share this with you. It says 38 percent right now are not using either software-defined storage or HCI; 25 percent are currently doing evaluations; 9 percent use software-defined storage; 16 percent use HCI; and 9 percent use both.
What’s your take on that, Manish?
Manish: Excellent. Yeah, this was good information. It tells me that for the 38 percent, there is either not a need for SDS and HCI, or they’re currently just trying to get more information. And the rest of the audience is either evaluating or already underway with their SDS plans. So that’s good to know.
David: Okay. Great. And I think I put you back on the slide you were on. Is that right?
Manish: I believe that is correct. Yes. I appreciate it.
David: Sure.
NVMe Flash + Software-Defined Storage = Better Performance
Manish: So moving on. Like we talked about earlier, there are two things we’re trying to do: consolidate and improve the performance of your storage infrastructure. One way to do it is with NVMe Flash alone. The other way is to do it with NVMe Flash in combination with software-defined storage, so you get an even better performance gain.
So like we mentioned earlier, oftentimes a tiny portion of the data is responsible for the majority of the I/O requests in your infrastructure. So basically, what you do is add in your NVMe Flash storage, and at the same time add in your DataCore nodes, whether they be in software-defined storage or hyperconverged mode. And what we recommend is, if you don’t want to impact your SLA, then simply make sure at the time of provisioning that your NVMe drives are already in your DataCore SANsymphony servers, or simply utilize hot-plug NVMe U.2 drives.
Okay. Like I talked about earlier, auto-tiering makes sure that frequently accessed, or hot, data is placed on your fastest tier — in this case, Tier 1 would be NVMe Flash. So in this scenario, we take the hot data from your SAN and move it to the software-defined storage layer, specifically onto the NVMe Flash, so that you can get the best performance possible out of it.
And of course, as you can see, the data is moved permanently to the software-defined storage layer, and you are now running as fast as possible with NVMe Flash.
Now let’s talk about another scenario with software-defined storage. In this case, the DataCore SANsymphony servers are managing your storage area network, and they are abstracting the hardware from the software layer, if you will — storage virtualization is what we like to call it. So the application host simply requests a certain disk. We go, okay, we’ve got two SANs, each of which has 20 terabytes free. The application host wants 40 terabytes. We present a logical 40-terabyte drive to the application host, and then on the backend we can assign 20 terabytes to each SAN, depending on speed and access methodology.
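The 40-terabytes-from-two-SANs example can be sketched as a logical disk that maps blocks across backends. The round-robin striping here is only illustrative — real placement depends on speed and access methodology, as Manish notes:

```python
# Sketch of storage virtualization: present one logical disk to the
# host while mapping blocks across two backend SANs. The simple
# alternating (striped) layout is for illustration only.
class LogicalDisk:
    def __init__(self, backends):
        self.backends = backends   # e.g. two 20 TB SANs

    def locate(self, block):
        # map a logical block to (backend, physical block)
        backend = self.backends[block % len(self.backends)]
        return backend, block // len(self.backends)

disk = LogicalDisk(["san-a", "san-b"])
print(disk.locate(0))  # ('san-a', 0)
print(disk.locate(1))  # ('san-b', 0)
print(disk.locate(2))  # ('san-a', 1)
```

The host addresses one 40-terabyte drive; the mapping layer decides which SAN actually holds each block, which is what makes the later vendor-mixing and live-migration scenarios possible.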
So this way you can consolidate. If you have multiple hardware vendors in your environment, you can now use the combined storage without having to rely on just what’s available from one vendor.
We can also do upgrades, repairs, or any kind of maintenance you want on your current SAN without affecting your SLAs. Basically, you add your new SANs and copy your data over from your old SANs — in this example, it’s an upgrade from your old SAN to your new SAN. Data is copied in the background. We don’t care about the source and destination vendors; we make sure that we move all your data.
And as you can see, there is no impact on the service-level agreement that you have with your business. Your applications continue to work as usual. The data is up and available, and we’re simply making copies in the background to the new SAN, whether it’s added capacity or perhaps just replacing aging gear. You can do this live in production without having to schedule downtime.
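The zero-downtime migration pattern — background copy while in-flight writes are mirrored to both the old and new SAN — can be sketched as follows (illustrative only; dicts stand in for block devices):

```python
# Sketch of a live SAN migration: existing blocks are copied in the
# background while writes that arrive mid-copy are mirrored to both
# sides, so the application never sees an outage.
def migrate(old_san, new_san, live_writes):
    # background copy of everything already on the old SAN
    for block, data in list(old_san.items()):
        new_san[block] = data
    # writes arriving during the copy go to both old and new
    for block, data in live_writes:
        old_san[block] = data
        new_san[block] = data

old_san = {1: "x", 2: "y"}
new_san = {}
migrate(old_san, new_san, live_writes=[(2, "z")])
print(new_san)  # {1: 'x', 2: 'z'} - up to date, including the live write
```

Once the two sides are in sync, the host is switched to the new SAN and the old one can be retired, with no application-visible change.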
One of the things we learned from telemetry, which was surprising — pleasantly so — was the fact that most, if not all, of our customers schedule their maintenance or upgrades during the middle of the day, because they’re confident that their SLAs are not going to be impacted, and they’d like to see the new solution up and running for a few days. And so they’re doing it in the middle of the day.
And as you can see, the final step is to replace the aging gear. You transfer all the data over, start using your new SAN, and, of course, dispose of your old hardware as you wish. The application host is none the wiser. This is also one of the advantages of software-defined storage: if you want to enable encryption or anything like that at the backend, the application host continues business as normal. It sees the drive, writes data to it, sends I/O to it — and all of the data services at the backend are transparent to the application host.
With that, we come to our final slide, and like I promised, I kept it quick and painless — or relatively so. On this slide, I pretty much just want to summarize what we talked about: NVMe Flash and software-defined storage, in combination, give you the best bang for your buck and let you get the most performance out of your storage solution. You can run DataCore SANsymphony in software-defined storage mode, where you’re abstracting the hardware from the software — providing storage virtualization, if you will — and you can aggregate data across multiple SANs, perform maintenance, and manage all of your data from one console. Or you can run it in hyperconverged infrastructure mode, where you actually replace some of your SANs with x86 servers running storage, compute, and networking all in one box.
To that point, I want to briefly talk about the HCI-Flex appliance that we recently released. As you can see from the photo on the right-hand side, no surprises there — it’s a Dell PowerEdge R740 server. What we have basically done is installed the hypervisor — we give you the option of VMware ESXi or Microsoft Hyper-V — along with the SANsymphony software in an HCI configuration. We make sure that everything is bundled up, and you get a turnkey appliance that you can just rack and stack, answer two or three questions that are specific to your environment, and be up and running within the hour. A couple of times that I tried it, I was ready in 20 minutes, but it all depends on your environment and the needs of your organization.
And so finally, I just want to quickly say, I know I’ve given you a lot of information, and there’s a lot more on our website. At any point, if you want to give us a try, you can go to datacore.com/tryitout and either use the try-it-now option, schedule a live demo, or get a 30-day trial of the software. You can download the software and install it on most x86 hardware — the minimum requirements and things of that nature are listed — so you can quickly try it with the 30-day trial. Or perhaps speak to one of our knowledgeable sales engineers.
The good thing about our product also is that our licensing is pretty straightforward. We charge you per terabyte, and that’s it. We don’t care how many processors or cores you have; you’re simply charged based on the amount of storage you want us to manage.
Q&A
With that, I will move on to the final-final slide, which is the questions. So David [phonetic], I don’t know if you’ve been receiving questions, but if you have, I will attempt to answer them. If not, I will have to get back to you. But I will hand it over to you.
David: Absolutely. Yeah. We do have some questions here for you. So I’ll just pop up this poll question that says what additional information would you like about the DataCore solution? I’ll just leave that up for the audience while we do some Q&A.
I think you answered the first one. Brian [phonetic] is asking is there a free version for lab or testing. I think you said there is a 30-day free trial. Is that right?
Manish: There’s a 30-day free trial, and our partners and distributors also get NFR licenses. You can get it for 30 days, and if for some reason you need some more time, you can talk to our sales engineers. I’m sure they would be more than happy to give you an extension. So yeah, absolutely, you can try it and find out how your workloads behave.
David: Excellent. Question here from Tyler [phonetic]. I guess he’s already looking at the pricing on your website, and he says, “The pricing on the website is in one-terabyte increments. Is that a yearly cost or a monthly cost?” I’m not sure if you can comment on pricing here, or if we should get back to Tyler.
Manish: Well, I don’t want to get too specific on pricing, because the salespeople will come and smack me on the wrist. But I will tell you that we have two available licensing models. One is a perpetual model, and the other is subscription. The subscription pricing is yearly, and the perpetual, obviously, has no expiry.
So depending on whether you’re interested in CAPEX or OPEX, you choose the model that you want, and you choose how many terabytes. Obviously, the more you buy, the better discount you get. I’m not going to get into specific numbers — not only because I don’t know them, but again, that’s something I’ll let the sales folks talk to you about. But hopefully that answers the question.
David: Absolutely, yes. Another question here: “Is DataCore a base-level storage OS? So if you had a physical host, and you wanted to put DataCore on it, do you install an operating system first, and then DataCore, or do you just install DataCore? How does that work?”
Manish: Yeah. Good question. So DataCore is a Windows, or Win32, application. We currently install it on Windows Server 2012 and 2016. We’re almost done with our qualification of Server 2019, which will be out in a couple of months. Basically, if you have an ESXi hypervisor, you can install it on a Windows VM. You can also, of course, have Windows Server with the Hyper-V role enabled and install it either in the root partition or in a VM, as you wish. So it’s quite easy to install and manage.
David: Okay. Excellent. And then Todd [phonetic] is asking, “For your software-defined storage, is it a head-end appliance that sits between virtual hosts and existing SANs, or does it also host data itself?”
Manish: Great question. I’m glad you asked that, Todd. So basically the answer is yes — all of the above. Our hyperconverged infrastructure appliances come with storage and compute all in one box. But let’s say you have additional compute: you can also connect your external compute hosts, whether they’re physical or virtual, to the storage on our HCI appliances, or to servers where you have installed the HCI software, if you’d like to do it yourself.
And of course, the reverse is also possible. If you have extra compute in the box and you’d like to consume additional storage, you can also connect to an external SAN, either with iSCSI or with the optional Fibre Channel card that can be purchased with the larger appliances. Then, using the compute in the box, you can connect to as much storage as you want.
So, yeah, obviously you’ll have to get into the licensing with the salespeople. But basically, we give you the flexibility of using any and all of your existing infrastructure, adding new infrastructure, or connecting both, yes.
David: Okay. Nice. Let’s see. I think this is probably the last question, just to clarify: you can install DataCore on a physical host, and there’s a hardware compatibility list; or DataCore offers a hardware appliance that you can purchase from DataCore, so it’d be hardware with the DataCore software already on it. Is that true?
Manish: That is correct. DataCore SANsymphony is software, so you can download it and put it on x86 hardware or on any virtual machine. But if you want a simplified turnkey appliance, and you’re not interested in sizing your hardware, purchasing, and things of that nature — you want one hand to shake, one throat to choke, if you will — you can buy our appliance. That way you get everything preconfigured. You just have to answer a couple of questions, and the installation is done.
When there are issues, whether with sales ordering or with technical support, you have one number to call. If it’s a hardware issue, we make sure the hardware warranty is handled for you, so you don’t have to call the hardware vendor, Microsoft, DataCore, and a bunch of other people to get your issues resolved. You just call the one number that comes with your appliance purchase.
But yeah, we like to offer our customers the choice. Some customers have specific hardware needs, which an appliance may or may not meet, and so they like to have the software, so they can go ahead and use that.
David: Awesome. Well, I think you answered all the questions we had. It’s been great having you on. Thank you so much, Manish.
Manish: Yeah. Thanks, David. Appreciate it.
Let’s Start by Laying the Cornerstone of the Next-Generation Software-Defined Data Center
Additional Resources
- Software-Defined Storage is Critical for Modernizing the Data Center
- All-Flash Array Buying Considerations: The Long-Term Advantages of Software-Defined Storage
- If I Use NVMe, Why Do I Still Need Parallel I/O?
- DataCore SDS: Enabling Speed of Business and Innovation with Next-Gen Software-Defined Building Blocks