Webcast Transcript
Hello, everyone. Good morning on the West Coast. Good afternoon, if you’re on the East Coast, or good evening if you’re somewhere else in the world listening in to this webcast. Thanks for joining us today. My name is Sander Puerto. I’m the senior product marketing manager here at DataCore, and I will be your moderator for today. We would like to welcome you to today’s webcast where you will learn how to achieve 100% storage uptime for years. And we have City of Carmel here to share with you how they did it. Great.
So, before we get started, I just want to go over a few housekeeping points. This presentation is being recorded, and it will be available on-demand, and you’ll be receiving an email from BrightTALK with the link to be able to watch the on-demand replay, and that will happen after the conclusion of this webinar. Lastly, feel free to ask any questions while we’re going through understanding how City of Carmel was able to accomplish these benefits. Feel free to just ask any questions, and we will address them as we continue to go through the content. Okay?
Just so we understand the agenda for today, we’re going to go through some quick introductions to get started. And then we’re going to segue into the problem, understand what solutions were available, and the solution that was eventually picked to achieve these goals. And then we’ll talk about the outcome, how this affected the business, right? How it ended up becoming very valuable to the business. Then at the end we will just get a high-level understanding of what the end result was. Then we will jump into software-defined storage. For those that are not too familiar with software-defined storage, we’ll just briefly explain to you what it does and why it is something that you should consider for your company.
As far as the Q and A, we’ll go ahead and be able to answer any additional questions that may come up at the end, but we’ll try to address all the questions as we’re going through all the different bullet points, right? And at the very end, we’ll talk about what’s next, what would be our next engagement or next webcast. And we will also pick a winner, right? So, you have to stay through the webinar because at the end we will announce the winner. Again, feel free to ask any questions in between. And just so that you guys know, the actual prize is a $200 gift card from Amazon. So trust me, I would stay through the whole webinar to listen and be able to know at the end whether I was the winner or not.
Great. So, let’s quickly talk about who we have here today on this webcast. First, we have Brian McClesky. He’s a systems engineer with Mirazon. He’s got many years of experience architecting software-defined storage and enterprise-level storage in general. And then we have Rebecca Chike, systems supervisor with the City of Carmel, and then you have myself. Again, I introduced myself at the beginning. And with that said, Rebecca, if you can tell us a little bit about who City of Carmel is, please.
Rebecca: Thank you. Hello, everybody. City of Carmel is a suburb of Indianapolis. We’re located just north. We’re one of the fastest growing cities in Indiana. We have a lot of award-winning schools, thriving businesses, and we have a lot of family-oriented neighborhoods. We also have been named Best Place in America to Live by Money Magazine, and we also—it’s not listed here on this slide—but we have a lot of roundabouts. We have over a hundred in the city, and that’s one of the selling points of Carmel. It’s a great place to live, and I’ve been here since 1965.
Sander: Wow, that’s pretty interesting. The roundabouts, so that means people have to slow down a little bit, right, when they drive.
Rebecca: Yes. It’s cut down on accidents too.
Sander: Yes, that is great. Thanks for sharing that, Rebecca. And I will quickly jump on and talk to Brian about Mirazon. Can you share with us, Brian?
Brian: Hey, good afternoon. Thanks for having us here. We are an IT solutions provider, and we specialize in storage, virtualization, networking, pretty much anything in enterprise technology. We’ve been a DataCore partner for approximately 15 years. We’ve successfully implemented hundreds of deployments with DataCore. We also pair those with various virtualization platforms, whether that be VMware or Hyper-V. Typically we see DataCore a lot with virtualized storage and virtualized compute networks. We’ve got many years of experience in storage, networks, and connectivity with virtualization platforms. We have a lot of different areas of expertise, and we try to fill in gaps and supplement where we can. We also build all of our solutions custom for each of our clients. There’s no one-size-fits-all solution for anything. So we like to right-size the solutions that we choose to work with.
Sander: Excellent. Thanks for that introduction, Brian. And so one of the things here with Mirazon, right, is you guys have known DataCore for a long time. So, you definitely have been able to handle many different accounts, and I think City of Carmel is one of the oldest accounts that you’ve been able to handle, right?
Brian: That’s correct.
Sander: Okay. Excellent. All right, so let’s jump on to understanding more about how these success stories came about. But before we do that, let me tell you a little bit about DataCore. We are basically a pioneer in software-defined storage. We’ve been doing this for 20 years. And so we’ve obviously had a huge expansion globally: over 10,000 customers, 30,000 installations. And the great thing about it is that 95% of companies that go with DataCore end up keeping DataCore, right? And so that tells you how reliably the product delivers. In the case of City of Carmel, they’ve been with DataCore, if I’m not mistaken, for 13 years. So that goes to show you that the software delivers what we promise. We’re headquartered out of Fort Lauderdale, and obviously the software has done a lot of great things, and has a lot of great features which have allowed us to win awards, right?
We have won Product of the Year five times, and we’re definitely proud of it. So, our objective is to be able to show you what the software is already doing for companies, and at the end, you guys get to decide if this is something that can really help you or not, okay? So let’s talk about the first thing here with the City of Carmel. And I’m going to ask Rebecca to just take us a little bit through the situation, right, when you guys—and I know this was a while ago, right? Over a decade ago, when you had to make a decision and you had challenges and requirements to meet. And so just take us back to that time on how you came about making a decision on DataCore, and some of the pain points that you were dealing with.
Rebecca: Well, we were trying to go from a small city to a larger footprint in the world, and one of the things we had was very limited budgets. We had critical apps that had to be online. The infrastructure for cities is very important. So, we were going from physical servers to virtual servers, trying to maximize our—I guess—the bang for our buck. That’s the way to say it. We were trying to be as fiscally responsible as we could. DataCore was one of the things that was found at the time and it gave us the ability to put a lot of storage on very generic, very cost-effective platforms.
Sander: Excellent. At that time, obviously, through the evaluation process and deciding who to go with, were there other players in the picture besides DataCore? That’s, if you can remember. It’s been a while.
Rebecca: That was a long time ago. Actually, I think at the time a lot of the storage solutions were looked at, but DataCore won out because it, again, it went on all platforms. It really was very storage-generic and hardware-generic, and that allowed us to spend our money wisely. So that’s one of the reasons we ended up with that. And another good reason was because when you have to add storage, you didn’t have to do what they call a forklift. You could just add to it.
Sander: Exactly. So, overall it was flexibility in many different phases, right?
Rebecca: Yes.
Sander: And so one thing that I wanted to ask you here, as far as the applications that you’re running today that DataCore is able to help you, what type of applications do you have in your environment?
Rebecca: We’re running email servers on it. We’re running SQL Servers on it. We’re running voice-over-IP servers on it. We’re running GIS servers on it. It’s our basic server platform across the board for everything.
Sander: Wow. So 100% of your environment is running on DataCore?
Rebecca: There are some odd physical ones, but those are very specific and cannot be virtualized. So for, I would say, almost 99%, but yeah, a great deal of our systems are virtual.
Sander: Excellent, excellent. And my understanding is that most of the city is relying on this backend storage to stay running. And some of the departments depending on you, on your IT department, are the police and the fire department, right? Have they had any moments where they have been pretty happy that they have DataCore, or do they not even know? They just know that it’s running?
Rebecca: Correct. They don’t even know the backend, but they know it’s running. We cover everything from the street department, police department, and fire department to permitting, our department of community services, and human resources. Every department has storage on the system. So if there is an issue for whatever reason, for instance a fiber cut, one side stays up and the other side is down, and users don’t even know it.
Sander: Excellent, excellent. And so there is a question that came up while you were talking. It says, “Can you speak to vendor lock-in regarding DataCore?” I know you already mentioned it, but can you just elaborate a little more on that, on what have you experienced when it comes down to vendor lock-in for the last decade?
Rebecca: I’m sorry. Say that again.
Sander: Yes. That was a question that came from the audience regarding vendor lock-in as far as the hardware is concerned. And so over the last 10 years, I know that you mentioned that you could use just about any hardware, but just to answer the question from the audience, what type of things have you been able to exchange, just basically remove the pieces and put something else in?
Rebecca: Ah, I see. One of the things is, we use HP servers, and if we have to replace a server, we just take one side down, do the work, we’re running on the other side, we have a stretch cluster. So we run on the other side and do all the work we need to do on the hardware on the second site. And again, nobody knows. We also have done the same thing when we finally did do a refresh on our storage here last year. I think that was what, five years before, since the last one? Again, we just took one side down, did the work, put it back up, took the other side down, did the work, and…
Brian: On the other side of that, you can choose what hardware you want to run on. You don’t have to have HP. You can run Dell. You can run IBM. You can run Cisco, whoever you want. Really at the end of the day, it doesn’t matter. Whatever your preference is, is up to you. But the solution allows that hardware-agnostic capability, so you can choose. Say this solution that we have today may not be the best solution for us 5 years from now. We get the ability to change that if we want, and keep the software and the licensing, and not have to buy everything all over again. So, that’s one of the main benefits of the software.
Sander: Excellent. That’s a perfect segue to talk about the solution, Brian. I know that you guys have been working with City of Carmel for a long time. Can you tell us more about that relationship, and how you’ve been able to help them with DataCore?
Brian: Yeah, I’m kind of new to the mix in the long-term picture, but Mirazon is not new to the mix. We’ve been working with City of Carmel for probably a little over ten years now. We helped them design and implement the DataCore stretched VMware cluster that they have today that runs their entire server workload. They’ve got two data centers split about a mile or two apart. There’s dark fiber connectivity between those. So we’re mirroring everything synchronously, 100%, on the backend. So, it gives us a lot of flexibility. And DataCore really is the foundation of the solution. VMware is definitely part of it, but without DataCore, none of it would be possible. So, DataCore is mirroring all of our storage. We have 2 active-active data centers essentially running our virtual workloads. We’re in a metro cluster configuration with VMware HA. So if anything happens at site one, within a few minutes, everything’s up and running at site two. Storage-wise, if something happens at site one or site two, it doesn’t really miss a beat. It just keeps on trucking.
Rebecca mentioned the recent fiber cut that we experienced. There’s a lot of new construction as we’re growing the city, so accidents do happen and no one is immune to backhoes. So, that actually happened and we knew it happened, but the customers and the end users had no idea. They were completely oblivious that anything major was going on. So, that’s one of the main things that this solution can provide.
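To make the synchronous mirroring and transparent failover Brian describes a little more concrete, here is a minimal conceptual sketch in Python. It is purely illustrative: the class names and in-memory stores are assumptions invented for the example, not DataCore’s actual code or API.

```python
# Conceptual sketch of a synchronously mirrored volume across two data centers.
# Illustrative only -- not DataCore's implementation or API.

class InMemoryStore:
    """Stand-in for one site's backend array, just for the example."""
    def __init__(self):
        self.blocks = {}

    def write_block(self, block_id, data):
        self.blocks[block_id] = data
        return True


class MirroredVolume:
    def __init__(self, site_a, site_b):
        self.legs = [site_a, site_b]   # one copy of the data per data center
        self.degraded = False

    def write(self, block_id, data):
        """Send the write to both sites; hosts keep running even if one leg fails."""
        results = []
        for leg in self.legs:
            try:
                results.append(leg.write_block(block_id, data))
            except Exception:
                results.append(False)
        if not any(results):
            raise IOError("write failed at both sites")
        if not all(results):
            self.degraded = True   # one site is down: mirror out of sync, resync later
        return True                # acknowledged to the hypervisor as a normal write


volume = MirroredVolume(InMemoryStore(), InMemoryStore())
volume.write(42, b"payload")       # in the healthy case, both copies exist before this returns
```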
The flexibility that it adds is bar none. Just about a year ago, we did an upgrade on the network. We took things from a 1-gig network to a 10-gig network. We were able to do that with zero storage outage. We were able to do that with nobody knowing that we did a full network core replacement with zero downtime on the storage. So it’s pretty flexible in that regard.
We’ve also done multiple hardware refreshes on the SAN side, the storage side, where you get your useful life out of some hardware and you need to upgrade to the next thing. So this solution allows you to do that in a way that doesn’t require downtime. You can pretty much do a full SAN migration with no downtime, and that to me is huge. I came from a background of keeping things up and highly available before working with customers, like City of Carmel. And where uptime matters, this is the solution. This is the way to go.
It gives you workload mobility. For example, if you wanted to do power maintenance in one of your data centers, that’s typically a full outage. You have to shut your servers down. You have to schedule the electrician in to do whatever he needs to do, say you’re replacing batteries on your UPS, that kind of stuff. That’s typical downtime. With this type of solution, we can pretty much get things in a state and move the virtual servers to the other site, take site one down, do whatever maintenance we need to do. And once things are back to normal, bring things back up, and we can move the workloads back to where we want them. So, it gives us a lot of flexibility in planning things as well. That’s really the big value add here.
Rebecca: Yeah. One other thing I’d like to say too is the ease of use of DataCore. You configure it and I just let it run. I let it do its thing, and the ease of use is amazing.
Brian: The performance gains are really nice too. We’ve seen some backend storage failures on hardware before and the frontend servers had no idea. They’re not seeing any high latency. They’re not seeing any performance degradation at all. DataCore takes the hit for you, and you can work on the backend hardware while things are still running. There’s a nice buffer there.
Sander: Excellent. There’s a question here from the audience regarding the upgrade process to DataCore, and maybe Rebecca, you can talk about that. “Were there any issues migrating from what you had onto DataCore?”
Rebecca: That was many, many, many years ago. And I cannot remember any issues when we actually made the original changeover. We built it from scratch and went on. Anytime we have done an upgrade on the versions, because at one time it was SANMelody and we went to SANSymphony, when we did that it was just take one side down, upgrade one side, get it all running and then do the other side. So I’ve—no, I never had any issues.
Brian: As far as upgrades go, I’ve done a lot of these over the years. We actually like the pass-through disk idea. You can pass through LUNs from your existing storage solution through the DataCore server and serve them to your frontend servers. And you can then allow the DataCore software to mirror that storage from your old solution to your new solution in a seamless way that’s not going to impact your workloads. So that’s a really easy way of doing it. Another methodology we use is Storage vMotion from VMware. You can implement new hardware, new storage, and serve up new datastores to your VMware hosts. And then you can use Storage vMotion to move those server workloads from the old storage to the new storage. And you can do all of that with zero downtime.
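For anyone curious what that “mirror the old storage to the new storage, then cut over” approach looks like in principle, here is a small, hypothetical Python sketch. The block-level copy loop and dirty-block tracking are simplifying assumptions made for illustration; this is not the DataCore or VMware mechanism itself.

```python
# Conceptual sketch of a zero-downtime LUN migration: copy blocks in the background
# while host writes keep landing on the source, then re-copy anything dirtied during
# the pass. Illustrative only; names and structures are assumptions.

class MigratingLun:
    def __init__(self, source, target):
        self.source, self.target = source, target
        self.dirty = set()

    def host_write(self, block_id, data):
        """Host I/O continues during the migration; remember which blocks changed."""
        self.source[block_id] = data
        self.dirty.add(block_id)

    def migrate(self):
        # First pass: copy everything that exists on the source today.
        for block_id, data in list(self.source.items()):
            self.target[block_id] = data
        # Catch-up passes: re-copy anything written while we were copying.
        while self.dirty:
            block_id = self.dirty.pop()
            self.target[block_id] = self.source[block_id]
        # Source and target are now identical; host paths can switch to the target.

lun = MigratingLun(source={1: b"a", 2: b"b"}, target={})
lun.host_write(3, b"c")            # the workload keeps writing mid-migration
lun.migrate()
assert lun.target == lun.source    # old storage can now be retired with no outage
```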
Sander: Excellent, excellent. And another question we have here is, “Is the storage network right now iSCSI or fiber channel?”
Brian: It is iSCSI. It’s been iSCSI, I believe, since the beginning here, but Mirazon has customers with iSCSI and fiber channel. It works great with both protocols. iSCSI gives you a little bit of flexibility in these stretch scenarios just because you’re not dealing with fiber channel switching over the fiber optics. It gets really expensive when you’re trying to do metro clustering with it. So, as far as cost effectiveness goes, iSCSI is a really good solution for that.
Sander: Okay. And I don’t know if you already mentioned it, but what type of connectivity do you have between the two buildings?
Brian: It’s dark fiber connecting the core switches, so we have a pair of core switches at both of the data centers, and they’re connected redundantly. It’s pretty standard stuff.
Sander: Excellent, excellent. Another question that we have here, and this will probably be you, Brian, or Rebecca if you want to chime in. I think this is super valuable for you guys, Rebecca. And the question is, “How is this solution licensed?”
Brian: Well, it’s licensed based on capacity requirements and functionality needs. So, there are different features within each of the different tiers of DataCore. So it really just depends on what your needs are. We’re utilizing auto-tiering right now, which is helping with performance. That’s one of the really nice features about DataCore: we can tier the storage, so we can put hot data on fast storage and cold data on slow storage, and it’s all done seamlessly under the hood. DataCore does all that for us. So, we really don’t have any knowledge of that happening. All we know is things are performing very well. And it doesn’t cost an arm and a leg to implement that kind of solution. With that, you don’t need to go with an all-flash storage array in most scenarios. This is a great solution to give you the performance where you need it.
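As a rough illustration of the auto-tiering idea Brian mentions, here is a tiny Python sketch that places blocks on tiers by access frequency. The thresholds and tier names are made-up assumptions; this is not how DataCore actually implements it.

```python
# Conceptual sketch of auto-tiering: put frequently accessed ("hot") blocks on the
# fastest tier and rarely accessed ("cold") blocks on slower, cheaper disk.
# Thresholds and tier names are invented for illustration.

def retier(access_counts, tiers=("ssd_tier0", "sas_tier1", "nearline_tier2")):
    """Return a block -> tier placement based on recent access frequency."""
    placement = {}
    for block_id, hits in access_counts.items():
        if hits >= 100:
            placement[block_id] = tiers[0]   # hot data goes to SSD
        elif hits >= 10:
            placement[block_id] = tiers[1]
        else:
            placement[block_id] = tiers[2]   # cold data stays on spinning disk
    return placement

print(retier({"blk1": 500, "blk2": 42, "blk3": 1}))
```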
Sander: Excellent. And Rebecca, in your case, your ability to add more storage, how does that help you, and what has been your experience whenever you want to add additional storage? How does that process work for you?
Rebecca: The way they do their licensing is based on what storage you have currently. And when you have to add storage, you just go back and get another set of licenses and buy the hardware that goes with it. And it’s just a license key that you update and off you go.
Brian: And there’s also a nice feature within the storage pools within the DataCore software that allows you to do the physical replacement. So say you’re doing a physical swap from your old stuff to new stuff, you can bring it in, and say, replace the physical disk, and it will automatically shuffle the data on the backside of things and it’s very simple. It’s literally a next, next, next, finish operation. So, as far as the effort that goes into doing that, like Rebecca said, you have your software licensing and you present your backend storage to DataCore and then just tell DataCore to replace my old storage with the new storage and it’s done.
Sander: That sounds pretty easy.
Brian: It’s very easy.
Sander: I had another question here, and this is related to the instance where you had to add a little bit of Flash. If either one of you can tell me why did you do that, and how much did you add?
Brian: I don’t remember what our need was. I think we just wanted to improve overall speed and performance. So we’re using SSDs; we’re not doing any Flash, actually. It’s just traditional SSD drives that are onboard the DataCore servers themselves. And we’ve configured those to be our tier zero in the auto-tiering structure. So, basically, all of our hot data lives there first, and then everything else is traditional spinning disk.
Sander: So you just added a little more performance. You bought a couple of SSDs and you put them in DataCore. And the reason why I was asking you that is because, traditionally, if you needed more performance, you would have to go out and buy a whole enclosure of SSDs, right? So in this case, that’s the advantage that you guys have.
Brian: Yeah. And I’ve actually done some implementations with Flash onboard the DataCore hosts themselves. And to be honest, the DataCore software does such a great job with caching in the server RAM that it doesn’t hit the flash nearly as hard as it would without that RAM buffer. So, the RAM in the DataCore server can go a long way from a performance perspective. Essentially all of your writes are hitting RAM before they ever hit a storage controller. So they’re hitting at the speed of RAM, which is faster than any solid state.
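To illustrate the RAM caching point, here is a minimal write-back cache sketch in Python: writes are acknowledged once they land in memory and are flushed to slower storage later. It is a simplified, assumption-based model, not DataCore’s actual cache design.

```python
# Conceptual sketch of write-back caching in server RAM: the host's write is
# acknowledged as soon as it lands in memory, and dirty data is destaged to the
# backend disks afterwards. Illustrative only.

class WriteBackCache:
    def __init__(self, backend):
        self.backend = backend      # slower persistent storage (SSD or spinning disk)
        self.ram = {}               # dirty blocks held in memory

    def write(self, block_id, data):
        self.ram[block_id] = data   # acknowledged at RAM speed
        return True

    def read(self, block_id):
        return self.ram.get(block_id, self.backend.get(block_id))

    def destage(self):
        """Background flush of dirty blocks to the backend at its own pace."""
        while self.ram:
            block_id, data = self.ram.popitem()
            self.backend[block_id] = data

cache = WriteBackCache(backend={})
cache.write(7, b"hot write")       # returns immediately
cache.destage()                    # the backend catches up later
```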
Sander: Excellent, excellent. All right, so let’s move on to the next topic, right? Which is talking specifically about the outcome, right? And, Rebecca, if there’s anything here that you see that you want to talk about. But one of the things that really impresses me is that because of the way DataCore uses auto-tiering to let you manage multiple types of storage behind it, it has helped you skip a storage refresh. Can you talk about that?
Rebecca: Well, we extended the lives of our HP MSAs by at least, I think, 4 to 5 years because we didn’t need any more storage as in more size. What we did do is we added the onboard SSDs, so that took care of any other…
Brian: It made up for the backend being old.
Rebecca: Yeah, for the backend being old. That allowed us to budget for it, which in my world is very important, because if it’s not in your budget, it’s hard to get anything purchased. So, that allowed us to go several years without having to refresh. So we finally did a refresh this last year of the backend, but interestingly enough, we didn’t replace the servers that DataCore runs on. We just replaced the storage with more storage. And next year I’m going to budget for the actual hosts to get them replaced. So, it allows us to spread it out.
Sander: That’s great. That’s absolutely great.
Brian: And then one of the other instances with the HP: it allows us to do critical maintenance and optimizations on the backend storage and kind of prolong its lifespan without major outages. Because in a traditional SAN setup, you’re pretty much taking the whole data center offline to do a SAN upgrade. Even routine maintenance could be downtime. And the City of Carmel doesn’t have that luxury. So it’s been one of those things that is a godsend. It really helps us in those times where we really need it.
Sander: Excellent, excellent. Now, there is something else that I wanted to talk about, and that is the fact that you haven’t experienced any storage-related downtime in close to a decade. Can you tell us a little bit more about that? I know you mentioned doing the upgrades, but what has it been like for you not to experience downtime? Especially if you are the person that is supposed to answer if something goes bad. How has that been for you?
Rebecca: Well, it gives me a little bit of peace of mind. Storage outage is a big deal, keeping everybody up and running. There have been fiber cuts and there have been network issues, but the storage has always been rock solid. So DataCore has been the—I don’t know how to—
Brian: It’s been the glue of everything, because we have had backend components within the physical storage fail. Hardware fails, that’s a given. It will fail eventually, and DataCore takes the hit when that happens and nobody notices it. It just does its job. Everything stays up and running. Nobody notices that we had problems, and we can work on that at our leisure on a schedule that makes sense to us, so we’re not up at 3:00 a.m. doing stuff; we can do it during the business day. Those are pretty big things from an IT person’s perspective.
Sander: So, just to be clear, what you’re saying is even if you have some type of failure on your HP MSA, business continues to run?
Brian: That’s correct.
Sander: Excellent, excellent. All right, so the one thing that I wanted to talk about from a VMware cluster perspective: is there any type of integration between DataCore and VMware that you’re running right now? Or how are you handling the failover piece?
Brian: Right now, there’s not really any integration between VMware and DataCore. They’re kind of separate entities from a management perspective. Essentially, we’re just doing a stretched cluster. We have hosts in both sites, and we’re allowing VMware, specifically the high availability (HA) component, to handle site failures for compute or network reasons. On the backend of that, we have stretched our iSCSI networks at the layer-two level to allow multipath I/O to handle that. So VMware handles the MPIO traffic for those DataCore servers. So if site A is down, all the traffic goes to site B, and vice versa. And it’s seamless. And as long as those network paths are there, it works.
Sander: Automated. Is that right?
Brian: It’s automated. Yeah. It’s kind of a set-it-and-forget-it. Once you build it, it’s pretty much you know it’s going to work when you need it to.
Sander: Yeah, because that’s really interesting that you said—although I know we do have some integrations with VMware, in this scenario, you guys are not using any of the integrations. Yet, you have gotten 100% redundancy: if you lose one site, as you mentioned, right, everything automatically fails over to the second site because you have active-active storage from both ends. So, that’s really, really amazing. That is huge.
Brian: Yeah. And I’ve done these as far away as 20 miles apart, and it works just as well as it does with the nodes sitting in the same data center together. As long as your network is there and you’ve got low latency, relatively speaking, it’s totally doable. The network, obviously, is a foundational component of that. So your network needs to be there for that to work, but as long as you’ve got a solid network, DataCore does an excellent job at mirroring and keeping things redundant and protecting the data, and making it perform well. It performs much better with DataCore in front of it than just the raw hardware behind it, so it gives you a nice boost in performance.
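Here is a very small Python sketch of the multipath failover Brian described a moment ago: if every path to site A is gone, I/O simply continues over the surviving paths to site B. The path names are invented for the example; this is not VMware’s or DataCore’s MPIO code.

```python
# Conceptual sketch of multipath I/O (MPIO) across two sites: when all paths to one
# site fail, traffic keeps flowing over the surviving paths. Path names are invented.

def pick_path(paths):
    """Return the first healthy path; raise only if no path to either site is alive."""
    for path in paths:
        if path["alive"]:
            return path["name"]
    raise IOError("all paths down -- no surviving site")

paths = [
    {"name": "iscsi-siteA-1", "alive": False},   # fiber cut took site A offline
    {"name": "iscsi-siteA-2", "alive": False},
    {"name": "iscsi-siteB-1", "alive": True},
    {"name": "iscsi-siteB-2", "alive": True},
]
print(pick_path(paths))   # -> "iscsi-siteB-1"; the hosts never notice the failure
```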
Sander: Excellent. There’s a question here for you, Brian. And the question is, “Does Mirazon serve local cities or agencies in the West Coast?”
Brian: Yes, we do. We actually have customers all over the United States, as well as internationally. So we support customers all over the world.
Sander: Great. Great. And here’s another question. I think we already answered it earlier, but just for the sake of giving this person the answer, right, because we want to make sure that we answer these types of questions. And here it is: “Are you able to do DataCore software upgrades without any outage?”
Brian: Yes.
Rebecca: Yes.
Brian: And you can also upgrade the OS that DataCore runs on and the hardware that DataCore runs on without any outage. So every component of it, like Rebecca mentioned earlier—it gives us the ability to do components, and you can kind of limit your focus, so you don’t have to look at everything as a whole solution. You can say, I just want to upgrade the Windows Server OS on this DataCore box today. And then next week we’re going to upgrade the networking components, or next month we’re going to do the backend storage. So you get to pick and choose when you want to do those versus doing it all at once and having the hardware forklift everything. So, it’s very flexible, and software upgrades, OS upgrades, hardware upgrades, pretty much any kind of upgrade within the DataCore solution is without downtime.
Sander: Excellent, excellent. And there was another question here again, Rebecca, and I know you kind of talked about it at the beginning. What were the key factors that distinguished DataCore for you versus the competitors? I know you mentioned a few things. Do you mind mentioning that again?
Rebecca: The flexibility with money. Flexibility, that’s always a key one for me. We have to be fiscally responsible. This is taxpayer money, and I’m a taxpayer, and I want to be fiscally responsible. The ease of use on different hardware platforms—we wanted to be able to go to different things if we needed to—and the ease of the components. Like he just said, if we want to upgrade this piece of it, we can, and it doesn’t affect DataCore. Does that answer? How’s that?
Sander: Yeah, absolutely. Absolutely. So thank you for that, Rebecca. Okay. So, now that we understand the solution, right, and all the great things that it’s done for City of Carmel, right, the results: Brian, based on the picture that we have in that diagram, would that be a fair diagram of what your environment looks like?
Brian: Yeah. We basically got it split right down the middle. It’s kind of a half-and-half. You put 50% of the resources on one side and 50% on the other side. And we can also use things within the VMware platform to pin things to data centers, because if you’re familiar with the Distributed Resource Scheduler (DRS) within VMware, it likes to shuffle stuff around. But it has no concept of, this building is 10 miles away from me. So, you have to use those types of tools to pin workloads. So, we have some scenarios where we want things to be super highly available. For example, we have a web server farm; you’d probably want half of it at one site, half of it at the other site. So, if we did lose a site, nothing misses a beat. So, you can split virtual machines based on rules that you’ve defined within VMware and kind of pin those, and you can automate that so you’re not manually placing things.
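In the spirit of the DRS pinning Brian mentions, here is a tiny, hypothetical Python sketch that splits a farm of VMs evenly between two site affinity groups, so that losing either site still leaves half the farm running. The group names are assumptions, not actual VMware rule syntax.

```python
# Conceptual sketch of pinning workloads by site: alternate VMs between two site
# affinity groups so each data center carries half the farm. Names are invented.

def split_across_sites(vms, sites=("site-A", "site-B")):
    """Alternate VMs between the two site affinity groups."""
    return {vm: sites[i % len(sites)] for i, vm in enumerate(vms)}

web_farm = ["web01", "web02", "web03", "web04"]
print(split_across_sites(web_farm))
# {'web01': 'site-A', 'web02': 'site-B', 'web03': 'site-A', 'web04': 'site-B'}
```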
Sander: Yep, absolutely. That’s great. And obviously, over the last decade, there’s been growth, right, Rebecca? And when it comes down to your growth as far as growing from 30 VMs to 80 or possibly more, what have you had to do? If you can take us through that, have you just added more VMs or more hosts? Can you tell us a little bit about that growth?
Rebecca: Well, when we added ESX hosts, we did have to refresh them, and we extended the resources on there. But one of the things about DataCore storage is you can reuse it. You can reclaim it and give it to another VM. So that has—I use DataCore for the thin provisioning side of it. I don’t use it in VMware at all. I don’t even pay attention to that over on that side. DataCore allows me to really do a lot of thin provisioning and extend my storage.
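For readers less familiar with the thin provisioning Rebecca refers to, here is a small conceptual sketch in Python: the volume advertises a large logical size, consumes physical space only for blocks actually written, and reclaims space when blocks are freed. The sizes and method names are illustrative assumptions, not a DataCore API.

```python
# Conceptual sketch of thin provisioning: a volume presents a large logical size but
# only consumes physical capacity for blocks that have actually been written, and
# freed blocks can be reclaimed back into the shared pool. Illustrative only.

class ThinVolume:
    def __init__(self, logical_size_gb):
        self.logical_size_gb = logical_size_gb   # what the host thinks it has
        self.allocated = {}                      # physical space allocated on first write

    def write(self, block_id, data):
        self.allocated[block_id] = data

    def unmap(self, block_id):
        """Reclaim space when the host frees a block (e.g. after deleting a VM)."""
        self.allocated.pop(block_id, None)

    def physical_used_blocks(self):
        return len(self.allocated)

vol = ThinVolume(logical_size_gb=1000)   # presented as 1 TB
vol.write(1, b"data")
print(vol.physical_used_blocks())        # 1 -- only written blocks consume real capacity
```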
Sander: Excellent, excellent. And you mentioned the money, right, that you have to be fiscally responsible with it. One of the questions I have for you is, as far as these last 10-12 years, when it comes down to having to upgrade DataCore, have you had to purchase DataCore again? I know you started with SANMelody. Can you tell us about that process? And have you had to buy the software all over again?
Rebecca: No, you don’t buy the software over again. As long as you keep your support online, you just have to buy the extra storage or the additional storage you’re putting on. So you don’t lose the actual license.
Brian: With a lot of other SAN vendors, typically an appliance-based SAN solution, when you upgrade the hardware, you also forklift and upgrade the software and have to buy it new again. DataCore is not that at all, which is one of the nice things about it: you still have that flexibility, but you don’t have to pay for it over and over again.
Rebecca: Right. And that has helped quite a bit. Yeah.
Sander: Excellent. All right, great. So yeah, thanks definitely for sharing that. And one last thing I wanted to ask you here, Rebecca, as far as your experience with DataCore support, you’ve probably talked to a lot of people over the last 10 years. How has that relationship been? How has DataCore been able to help you?
Rebecca: DataCore has been great. They’re always there. They always answer. I know there was one time that we had something we didn’t understand, and they kept working with us until we got it fixed. I don’t have too many other vendors that do that.
Brian: I have a few similar stories or experiences, at least from the support aspect of things. I’ve had a customer who had a hardware problem. We’re not going to name any names, but it was definitely not a DataCore problem. However, the DataCore support folks stayed on this ticket, and stayed on the case until it was resolved. Even though they knew good and well, it wasn’t their problem, but they helped get us to a solution. They didn’t point fingers. They didn’t play the “Oh, it’s not my problem,” game. They made sure that the problem was solved. They don’t care whose problem it is. At the end of the day, the support guys are there to make sure you’re up and running.
Rebecca: Yeah. And quite frankly, I’ve had the same experience with Mirazon. When I’ve called, they’ve been there.
Brian: Thank you.
Sander: Excellent. That’s good. It’s a perfect match, right?
Rebecca: It’s been a good relationship over the years for both of us.
Sander: I’m glad to hear that, Rebecca. Now we just got to get you down to Fort Lauderdale, right, just so that you can come and meet the team in-person one of these days.
Rebecca: Actually, I was down there many years ago.
Sander: Oh, you did? Okay. Excellent. Maybe I wasn’t with DataCore yet.
Rebecca: Probably not. This was way back in the very beginning.
Sander: That’s good. That’s good. So, your process of upgrading, you started with SANMelody?
Rebecca: Uh-huh. Yes.
Sander: And then you went on to SANSymphony and then SANSymphony-V?
Brian: Yeah. We’re on the SANSymphony-V. I believe we’re on one of the latest versions at this point.
Rebecca: Yes.
Brian: Yeah.
Sander: Okay. So yeah, the, those are [crosstalk].
Brian: That’s for every single product, every single product from SANMelody all the way up.
Sander: That’s great. That’s great. Excellent. Okay. So, maybe—I have a question here, and I will probably let Brian answer this one, and the question is, “What is metro cluster?” I don’t know if they’re just joking, but let’s answer it.
Brian: That’s a great question. A metro cluster is essentially where you would split up your compute and your backend storage resources into multiple data centers within a metropolitan region. Metro is the keyword in this phrase because there are some limits to networking. The further away you go, the slower things get. So, you have to find a reasonable distance when you’re talking about a stretched implementation like this. What makes the most sense for your workload? What makes most sense for your business? And figure out the logistics, but metro basically means that your cluster is stretched within a metropolitan area. So, we’re talking anywhere from a couple miles to about 20 to 30 miles, depending on your network connectivity.
Sander: Yeah. You mentioned something key there, right, stretch configuration. That may be another name that they may know it as, right, stretch configuration?
Brian: Correct.
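As a rough back-of-the-envelope check on why “metro” distances still work for a stretched, synchronously mirrored setup, here is a small Python calculation. The propagation figure (roughly 200 km per millisecond in fiber) is an approximation, and it ignores switching and protocol overhead.

```python
# Rough latency math behind the "metro" distance limit: light in fiber travels roughly
# 200 km per millisecond, and a synchronous mirror adds at least one round trip per
# write. Numbers are approximations for illustration only.

def added_write_latency_ms(distance_km, km_per_ms=200.0):
    return 2 * distance_km / km_per_ms     # round trip between the two sites

for miles in (2, 20, 30):
    km = miles * 1.609
    print(f"{miles} miles ~ {added_write_latency_ms(km):.2f} ms added per mirrored write")
# A couple of miles is negligible; even tens of miles stays well under a millisecond,
# which is why metro-distance stretched clusters remain practical.
```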
Sander: Okay. Excellent. All right, so we have about 15 minutes remaining. If you guys have any additional questions, feel free to submit them now. We’re at the end of the webinar here, so we have a few more minutes to address any other questions that you may have in mind.
In the meantime, let’s go ahead and do a quick summary, right, of the success story here. Number one, you have these high availability solutions with software-defined storage, and you’ve been doing it for 9 years without any outage, about 12, 13 years in total. And then if we look at the hardware refresh savings, as you mentioned, right, you were able to prolong the HP MSA life expectancy up to 8-9 years. So, that’s basically 80% additional time that you were able to get out of that hardware, which in turn means you were able to save those dollars for those 4 or 5 years.
The flexibility, right? You can use whatever hardware or VM, so that you can continue to grow this and just add more hosts without having to replace the backend storage. It’s fully agnostic. Whatever you add today will work: any server, any hypervisor. And I think the part that Rebecca loves is the no-forklift upgrades, right? It’s just a matter of replacing components and not the whole solution. And as far as performance, as Brian had mentioned, we do have caching using memory, which is pretty good as far as providing performance, plus CPU prioritization. But if you needed more performance, you can definitely just add a tad bit of SSDs, and that automatically becomes your tier zero, as you mentioned, right, to give you that additional boost of performance.
So overall, I think it’s been a great run, and at the same time, I know you’re going to replace your DataCore servers next year, as you mentioned. And obviously, there are some options that you’re going to have, and you’re going to see in a little bit, Rebecca. But before we go there, I just want to quickly explain, for those of you that may not be familiar with software-defined storage, what it basically is: a layer of intelligence that runs on top of x86 hardware, or, if you want to run it on a VM inside of a hypervisor, you can do that as well. So you have the flexibility of running it as a bare-metal solution or as a virtualized solution on a VM. And the minute you allow that layer of intelligence to take over and begin handling all your hardware, from the compute to the storage, you allow the software to enhance the effectiveness of that hardware, including the ability to auto-tier, right?
So, it’s all about flexibility at the end of the day, right? And you continue to gain more benefits, right, as you continue to add onto your environment and add more of the features. It’s a very complete package, but sometimes not everyone uses every single feature right away. And so you can continue to add more of these features, and it will definitely help you in the long run. And at the end, it’s all about making sure that you don’t waste any of your storage, right? You consolidate all the storage underneath, and you allow the software to handle communication between those storage devices regardless of who the vendor is, right? It allows you to create that pool of virtualized storage.
All right. So, let’s see. Let’s quickly take a look at this here, and if I was to describe your environment, Rebecca, you are probably on the left-hand side. You have the external storage virtualization with the HP MSAs underneath. The difference is that it is not in the same data center, but across two data centers, and that’s what makes it a metro cluster. But from what you see on the screen, you can see that there’s an evolution process for many of our customers. And we have customers today that have started on the left side and ended up on the far right-hand side, or we’ve had customers that started on the far right-hand side. So, it all depends on the situation, the dynamics of the environment, the hardware, and also what type of environment they have. But you definitely have the ability to evolve as you grow.
And quickly here, I know, Brian, that you guys serve multiple clients, and in fact, you serve multiple DataCore clients as well. Are there any of these different deployment models that you’re familiar with that you would recommend for any particular industry?
Brian: Yeah. Every solution on this slide kind of has its place. We have more customers on the left-hand side of this slide because we’re dealing with some banking, insurance, municipalities that require that level of stretched uptime and multiple-site access. But we also do have a handful of customers using the hyperconverged model, which is a really great way to scale your environment. It is a simple way to scale things out. So we’ve seen both, but I’ve also got very, very tiny, small customers that are running on a single node of DataCore. We don’t recommend that for high availability reasons, but it does work. So we’ve seen it from small to large and everything in between. But the customers that need the highest level of uptime are usually going with the stretched implementation.
Sander: Excellent, excellent. And from your experience as far as the migration process to at least the two that you have experience with, how much downtime does that require then?
Brian: None.
Sander: Okay. That was pretty straight forward. Excellent.
Brian: It doesn’t require downtime. The way these solutions are designed is you can integrate them, at least from the VMware partnership and DataCore. There are ways to do all of it with zero downtime.
Sander: Okay. Excellent, excellent. All right. Perfect. And so just to close it up, I know, Rebecca, you gave us a couple of quotes here on what DataCore has done for you, but what would be your last words to someone that may be in the same situation as you were 12 years ago, and they also are in charge of a municipality, right, some type of seat of government? What would be your recommendation to them as far as giving DataCore a try?
Rebecca: If you need high availability on generic hardware and you need to purchase hardware in a fiscally responsible way, try DataCore. It works on Windows and is highly available, and not very—it’s a good solution for your bottom line.
Brian: I think the biggest piece of it is it’s flexible.
Rebecca: That too.
Brian: We talked about fiber channel and iSCSI earlier. Say we want to go from iSCSI to fiber channel; that’s an option. You can just flip over to that new protocol, pretty much the same way everything else works, with no downtime. So it gives you a lot of options that you don’t have in a traditional storage array.
Sander: Excellent. I appreciate that. And so we have a few more minutes left before I pick the winner for today for the giveaway, right? There are a couple of questions that came up last-minute. And if, Brian, you can answer this, it is, “What is VSAN? What is DataCore VSAN?”
Brian: VSAN is basically DataCore running in a hyperconverged solution. I think it’s similar to—VMware has got their version of it, they call it VSAN. “Virtual Storage Area Network” is literally what it stands for. But essentially it’s running DataCore software in a virtual machine and it allows you to build a hyperconverged virtualization platform. That’s pretty much it, a different flavor of the software.
Sander: Excellent. And there’s a question which I think I’ll take. It says, “How does DataCore determine pricing?” And it’s pretty straightforward, and I think Rebecca talked about it as well. It’s just based on how many terabytes you’re managing, right? If you have 50 terabytes, and you have this in a highly available environment, 50 on each side, and you have 2 copies of your data, then all you’re doing is licensing 50 terabytes on each side. Then there are a couple of variations on the licensing depending on whether you’re doing hyperconverged or bare metal, and you get to decide which one brings the most benefit for your particular environment, and then just determine how many terabytes you need. And that’s pretty much it. That’s a pretty straightforward formula.
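Here is a trivial sketch of the capacity math from that answer, using the example numbers from the webcast; any per-terabyte pricing would come from DataCore, so none is assumed here.

```python
# Capacity-based licensing example: with a mirrored, highly available setup holding
# 50 TB of managed data per side, you license 50 TB on each of the two nodes.

def licensed_tb(managed_tb_per_side, nodes=2):
    """Total terabytes licensed across the mirrored nodes."""
    return managed_tb_per_side * nodes

print(licensed_tb(50))   # 50 TB per side -> 100 TB licensed across the two nodes
```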
All right, so let’s go ahead and get this finished unless you guys have any other comments before we wrap it up.
Brian: I’m good.
Sander: Okay, great. Well, let me tell you, I appreciate you guys just answering these questions as they come, right? I know that none of this was pre-scripted, including the questions that we answered. So, sometimes it can be a little nerve-wracking, but at the same time I know that you guys are very knowledgeable about these environments. So, I truly appreciate you being able to answer all these questions.
Brian: Yeah, no problem.
Rebecca: No problem.
Sander: Excellent. All right, so we already have the winner. And to wrap it up, the last thing I just wanted to mention is that there are some materials that you can download, right? There are some attachments here. There’s a PDF of this presentation that you can download. Just go to the area where it says attachments, and then there are additional white papers. I think we have an eBook there. And if whatever you saw today really draws your attention and it’s something that you want to learn more about, please feel free to request a live demo, right? We would have one of our engineers get in contact with you to understand your scenario and what exactly it is that you need help with, and they’ll tell you exactly whether we can help you or not, which in most cases, I would say 99% of the cases, we are able to help.
So thank you everyone for being on the webcast. Thank you, Rebecca. Thank you, Brian. You guys have been great. And everyone, if you guys want to see the replay, you should be getting an email to watch a replay. I know there’s some people that join late. The replay will be available immediately after we’re done here. If you guys want to continue hearing more of these success stories, our next one is going to be in April. So, I ask you to just look out for the email and you’ll know exactly when the date and time will be. And again, thanks everyone for being on the call, and you guys have a great weekend.