Webcast Transcript
Mr. Chadwell: Hello, everyone. Good morning, good afternoon, wherever you may be. I want to thank everyone for joining us for DataCore Storage Success Stories. Today we’re happy to highlight Architectural Nexus and how they achieved fast application performance on time and on budget. This is the first of the series, so thank you all for attending. Looking forward to highlighting one of our customers and one of our partners, as we’ve got a pretty unique and interesting story to tell you here today.
This is really kind of dedicated to some other folks that might be dealing with certain challenges within their environment. So looking forward to getting into the weeds a little bit further today with my panel. I will introduce our partner and our customer and talk a little bit more about DataCore and how our solution was able to help play a role in this transformation. We’ll get into the story on exactly what the challenges were that our customer, Architectural Nexus, was faced with, the solution, the outcome and the results.
So we’ll get into that here in a bit, and you can see today’s agenda. With that, I’ll walk through a little bit more about our speakers and the companies. Myself, my name is Michael Chadwell. I am our head of channels here in the U.S., and I’m happy to be accompanied by Kent Hanson, the director of IT for Architectural Nexus; Steve Hunsaker here at DataCore, our director of systems engineering; and Jonathan Kendrick of USI, our partner of the month, who helped us with this account and helped Kent really overcome a few challenges.
I just want to highlight a few housekeeping rules for everyone here on the call. Today will be a panel discussion, and I want to make sure everyone feels empowered to put questions in the chat window; you’ll see that on the right. Today’s presentation is being recorded and will be available on demand; you will receive an email from BrightTALK at the conclusion of the webinar.
Also, please note that we have a few attachments within today’s presentation that you can take home with you, including a white paper describing this use case in further detail. So with that, I’m going to go ahead and kick things off. As part of the panel, I want to introduce Kent Hanson. Kent, if you would, please tell us a little bit more about your role and about Architectural Nexus and the company itself.
Mr. Hanson: Sure. So hello everyone. My name is Kent Hanson. I am director of IT for Architectural Nexus as was stated. What we do, we’re an architecture company based out of Salt Lake City. And we have an office in Sacramento. We started in 1985. Recently we have been very focused on green building and the green building challenge. So in order to walk the walk rather than just talk the talk, we built our own office in Sacramento and it was the first living certified building.
And what that means, if you don’t know what living certified is: we give nothing to the city and we take nothing from the city. We produce all of our own energy. We capture our own rainwater and filter it for anything that’s used in the building. We give nothing back to the city in terms of sewage or anything else; it’s all composted on site. So it’s a true living building, and we produce 125 percent of the energy that we consume. That has led us to helping other customers develop and drive their energy models to reduce the energy consumption needed to run a building, if you will. We help companies all over the United States. So that’s what we do.
Our challenge, then, was not only to build a building that was living building certified but also to be able to do the work that we do, which is very process-heavy and compute-hungry. With that, I’ll hand it back.
Mr. Chadwell: Yeah, yeah, absolutely. Looking forward to digging into that. A lot of folks here on the call are in architecture firms yourselves or in very data-intensive environments, so we’ll let Kent talk a little bit more about that here in just a minute. I want to turn it over and highlight our partner here on the call that works tightly with Architectural Nexus. Jonathan, if you would, please introduce yourself, tell us a little bit more about Universal Systems, and maybe a little about what excites you about today.
Mr. Kendrick: Thank you for that. My name is Jonathan Kendrick, and I’m the director of business development at Universal Systems Incorporated, or USI. We specialize in OEM manufacturing and OEM builds for a variety of different customers. Sometimes we help them brand it, but we also have a brand of our own, our Universal line. And when it comes to that, we consider ourselves not just a manufacturer at that point; we like to provide solutions, help our customers with whatever problems they’re going to encounter, and help them design the right architecture for their environment.
Which is where we pick up the story of how Kent and I got into the DataCore SAN and using it in his environment. I would be remiss if I didn’t say that as an OEM manufacturer, we’ve had the opportunity to see a lot of different storage solutions, pre-packaged solutions, boxes. We’ve vetted many different solutions, and when it came down to it, with the feature set and the ability to put DataCore into any environment to build scalable solutions, both hardware and software, and meet the customer’s needs, partnering with DataCore about eight years ago was a hands-down decision for us. So we’ve built many architectures around this storage solution and done custom builds with that product.
Mr. Chadwell: Awesome. Yeah, thanks for that Jonathan. I’m already getting questions in the chat and happy to say that Jonathan with USI is helping us promote and run a DataCore user group in Salt Lake early next month on February 7th. So happy to have you on board. Happy to have you and Kent chat about how we have really accomplished something pretty amazing there at Architectural Nexus. Before I get into the weeds a little bit further, I do just want to introduce Steve Hunsaker, our resident expert on software defined storage here at DataCore.
Steve, do you want to tell everyone a little bit more briefly about us and then we’ll dig into a little more on the discussion with Kent?
Mr. Hunsaker: Absolutely. Thanks, everyone, for attending. My name is Steve Hunsaker, and I am the director of Solution Architecture for the Americas, and I’m delighted to tell you more about DataCore and our product. To put it simply, for 21 years we’ve been providing software to the world where customers can bring their own hardware to the table; in this case, USI delivered them a Supermicro box. Because we’ve been doing it for 21 years, we believe we’re the authority.
We pioneered the technology that eventually led from what we originally called software-driven architecture to where the industry has decided to put us in a category called software-defined storage. We are worldwide, with many delighted customers; I think the 95 percent renewal rate speaks for itself. It’s a wonderful customer base that is satisfied not only with what we’re able to deliver in performance and cost but is also eager to renew and continue with the perpetual license that we offer. We’re headquartered in Fort Lauderdale, Florida.
Actually, that’s where I’m at right now, meeting with all kinds of people who have energy and passion. We really feel like we’re one of the few manufacturers on the planet that is able to swim in blue water, really innovate the market, and bring solutions like the one we delivered for Kent to the table. I want to thank Kent; I’ve been working with him for a few years, he’s a great guy, and I’m looking forward to this presentation.
Mr. Chadwell: Thanks, guys. That is the panel for today. As Steve highlighted, we’re grateful to have a local partner like USI; that’s why we are 100 percent a channel company. We view our solution as that smart storage layer that can be used across various industries, and we partner with the best channel partners we can find, partners that have the regional depth and insight to work closely within their accounts and essentially be the thought leader and trusted advisor, like you’ll see here with Architectural Nexus.
So let’s dig into the weeds a little bit, right? Let’s talk about this storage success story. Let’s talk about the problem and get right into it. So Kent, I think it would be helpful for you to tell us a little bit more about your challenge. I know you said you started, I believe, what was it, the mid-80s? As the data explosion happened, what did that do to you and your business, and what prompted this relationship to solve this challenge?
Mr. Hanson: So back when I started for what is now Architectural Nexus, I think our whole backup on tape was 35 gigabytes. That was a lot of data back then. Now one project easily eclipses that with the amount of data that is produced. So as we grew, we came to the point where we had all of these data islands out on our network, and managing them became somewhat difficult, and it became obvious that we needed to move to a SAN solution. So we started looking into different products. I’ve been buying hardware from Jonathan for, how long, Jonathan, probably 16 years now?
Mr. Kendrick: Yeah, yeah.
Mr. Hanson: So it’s been a long time. I remember asking Jonathan, we were just talking about it one day, and I said, yeah, we’re going to buy a SAN, and he said, well, you really ought to look at DataCore. This was probably eight years ago now. At the time, I didn’t understand the software-defined thing, so we went with one of DataCore’s competitors for our primary data storage.
We ended up going with that SAN and ran it for a year or so. But around the same time we were buying the SAN, we also found ourselves in a situation where we had four different offices, actually it was probably five at that time, and we needed to have users working on the same project at the same time and opening the same files. So we started looking at how we do that. It was right about that time that Autodesk came out and said you can run Revit, which is building information modeling software, in Citrix. So we started looking at Citrix.
So we had our other SAN running, with VMware on top of it, and then we started going down the Citrix route. We had already dropped a lot of money on this equipment; it was expensive and it was running, but we now found ourselves in a situation where we needed these teams working on these projects, and we needed, I would call it, a virtual team, if you will. We would have people in Sacramento working on projects in Salt Lake and vice versa, and different drafters would come up on a different time schedule and have some free time available.
Well, how do we get them working in Sacramento? How do we get them working on the project in Salt Lake and drafting and whatnot? Citrix became the obvious solution. As we started looking into Citrix and what was required, we realized we needed some fast storage. So we went down and bought basically a bank of 15,000 RPM hard drives; I want to say we bought six or eight of them on our first go-around. And it ran one or two users really well. You could actually even get it up to about 10 users and nobody complained, but when we started pushing it harder than that, boy, the moaning you could hear from across state lines.
It was pretty bad how slow it got, rather quickly. I happened to be back down talking to Jonathan again about some more hardware, and he brought up DataCore again. And Jonathan, to his credit, said, let’s just do a pilot. What’s it going to hurt? All right, you come out and do a pilot. So he came out and did his pilot, and the moaning kind of died. It went away, and we were able to add more people to the SAN.
Basically, it was taking software and laying it over hardware that we’d already purchased. It was enterprise-class hardware, but it wasn’t a SAN. By purchasing DataCore and laying it over the top, it suddenly became a SAN, and at the time I didn’t realize how powerful that was. As we started to grow, more and more people were asking, hey, can I get a desktop there, can I get a desktop there?
One day, I don’t know how long ago it was, I started noticing that the IOPS on our SAN were routinely hitting 55,000. Then one day, to my surprise, we hit 372,000 IOPS on our SAN. On this little SAN that was almost an afterthought, that just blew me away. At the same time, data was exploding on our Compellent and we needed to start expanding that. I’m kind of hopping around here, so if you need me to back up, just say so.
Mr. Chadwell: No, no, no, you’re fine.
Mr. Hanson: The odd thing is, we had to upgrade the other SAN. I’m sorry, we needed to have more storage added to it because we were running out of terabytes. So I called our reseller and said, hey, give me a quote to bump our 2-terabyte drives to 4-terabyte drives. When I got the quote back, I almost fell out of my chair, because it was more than what I had paid for DataCore and for the hard drives in my DataCore SAN. In fact, the quote came back at four times the going rate of a hard drive. I called them up and said, I think there was a mistake, and they’re like, no, you have to understand, these are grade-A hard drives, right off the press, and you get the cream of the crop.
Well, if I’m getting the cream of the crop, why are they failing at twice the rate of the drives in my DataCore SAN? For every drive I was losing on DataCore, I was losing two on my other SAN and changing those out, and we were hitting the DataCore drives harder. So as things progressed a little more, I said, you know what, we’re going to hold off on this for right now. We’ll move the data off to the side and store it in a different area.
We started an archive process to do that. Then, not too long ago, I was told that our competitor SAN was end of life and going out of support, and I basically needed to forklift the hardware.
Mr. Chadwell: Let’s stop there, Kent. I just want to recap what I’ve heard and get Steve’s take on some of this as well. I think this story is really interesting, because you said that as your business scaled, collaboration became a necessary point. For folks to be able to download files and work on projects across cities, across geographies, these files were getting bigger, and the strain all fell on you and your team to essentially produce.
And after inserting DataCore through USI’s direction, your guys, and specifically your architects, which most people know are not cheap headcounts by any means, were sitting around waiting to download files or open files or save files. Well, now all of a sudden they’re able to collaborate faster, you’re able to —
Mr. Hanson: Let me speak to that for a second, because that’s an interesting point. At one point, we’d been opening AutoCAD files, you know, one-, two-, three-megabyte files, and they would get opened, worked on, and saved. Now, in Revit, because it’s all done so it can be turned into 3D models and whatnot, we’re opening files that are 300 to 500 meg. It’s not unheard of to get to 600 megabytes.
When you click to open that file, well, you could have six to eight people opening that 500 or 600 meg file. That’s a lot of data suddenly moving across the wire and coming off the hard drives, a lot more than the one or two meg files that were coming off before. So it became a real problem. Hurry up and wait. It was —
Mr. Chadwell: I was just going to turn it over to Steve. Steve, what does that do from a density perspective, or why does software-defined fit so well with an architecture like this?
Mr. Hunsaker: I think it goes back to the original baited hook I was trying to offer up, which is the fact that our software is perpetual: you own it forever. To say it simply, when you look at Kent and his purchase of Compellent, let’s face it, guys: with the storage arrays out there that you can go buy, you’re really paying for their software and not their hardware.
Though they were certainly trying to charge Kent the price tag for their hardware. But the point is, they’re all using commodity hardware. They’re likely not manufacturing any of the hardware themselves; they’re just putting a price tag on their array with their software, and that’s how they’re making the money. In this case, because DataCore is truly built with ultimate flexibility in mind, a DataCore SAN can simply ingest, or aggregate together, any Fibre Channel, iSCSI, or direct-attached storage into one pool.
So it matters not whether you’re using direct-attached storage or a Compellent or an XYZ or ABC array. You can grow it out using the environment that you have, build the capacity with commodity drives as Kent indicated, scale up very effectively, and then, in terms of your question —
Mr. Hanson: You lose no performance.
Mr. Hunsaker: That’s right. And I think it’s important, Kent, for you to spotlight that consolidation ratio going up.
Mr. Hanson: I was just going to say that now that we’re fully DataCore, I get to choose when my hardware goes end of life. Not somebody else. Does that make sense? I can go buy as many hard drives as I want and run those as long as I want to, and DataCore doesn’t care. I can call them up and get help with their software on drives that are 10 years old and they’ll still help me. It’s not just because a SAN manufacturer says, hey, you’re end of life, you’re end of support — does that make sense, what I’m trying to say?
Mr. Chadwell: Oh absolutely. Jonathan, I’d even ask you the question. How does that help your business, right? As a reseller and someone who is a strategic account advisor, how does that story help you?
Mr. Kendrick: Well, to be honest, it can hurt my business too, because if I got to resell the software over and over, there would be more revenue. That’s a completely different revenue stream that we’re not tapping into, because we’re not re-charging our customers. To be honest with you, we would probably make a lot more money, from the reseller point of view, if we were selling any other brand besides DataCore. But we wouldn’t be giving our customers the best solution. We wouldn’t be helping them manage their budget, and we wouldn’t be giving them a completely scalable solution that they are in control of.
Mr. Chadwell: Yeah, and by creating that trust like you have with Kent, as he said, he decides when he’s up for hardware refresh. You know he’s going to come back to you, right?
Mr. Kendrick: That’s true. So we get to take care of him and help on the hardware side, but because DataCore is a perpetual license, I’m not reselling him the software and bundling it together with the hardware as one package. That allows him, every time he refreshes that hardware, to reap those rewards and benefit on the cost of ownership over and over. I think, Kent, in your situation, you’ve probably refreshed your hardware, is it two times, or would this be the third time now?
Mr. Hanson: I think so, yeah. When we started into this, when we hit that 385,000 IOPS (it was more like 370,000 IOPS, to be precise), we were running 15,000 RPM drives. Right now our first tier, and that’s a whole other thing we can talk about, the key features and what we actually brought in by using DataCore, but our first tier is SSD now. Every write and every read, if it’s not cached in RAM, hits SSD first. And we decided to do that when it made sense, and there wasn’t much extra expense to it; we went down and bought the SSD drives, I guess that was the expense, made sure we had the licensing to do that from DataCore, and installed them. It’s kind of painless to do.
Mr. Kendrick: And if I can add to that: you mentioned that we did throw some SSD in there, and that’s not the end of it for your storage. Whatever technology comes around, even something that’s not invented yet, as long as it’ll plug into a PCI Express slot, we have the ability to grow that box while it’s still in place. It’s not a closed box that you put in a closet and forget about. We can continually improve it as the industry puts forward faster components.
Mr. Hanson: So let’s talk about that, because since we have SSD, our next step is NVMe, and I’m debating when, where, and at what point we move to that. Like I said, somebody comes along and says, hey, I need to open up this 500-megabyte file. On our competitor’s SAN, a lot of that would come straight from disk, and it was fast disk, but it still took a long time to open. When you open that first file (and correct me, Steve, if I’m wrong, I don’t want to give anybody any misinformation), DataCore caches that file in RAM.
I think the amount of cache that I had in the competitor’s SAN was 8 gig or something like that; it was not high. The amount of cache that I have in my DataCore is 64 gigabytes, and that’s because it’s software-defined: I get to choose what that is. So the first person comes along and opens the file. If it’s a recently used file, it’ll come off of the SSD RAID and get loaded into cache. For the next guy that comes along, if that file is still in cache, it pulls right out of RAM. On a routine basis, I’ve seen our production server delivering three, well, 2.4 gigabytes worth of data on our 10-gig network out to our Citrix for people to open.
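What Kent is describing maps onto a classic read-through RAM cache sitting in front of slower media. Here is a minimal Python sketch of that idea; the class name and the file-level granularity are illustrative assumptions for this transcript, not DataCore’s implementation, which caches at the block level.

```python
from collections import OrderedDict

class RamReadCache:
    """Illustrative read-through cache: the first read of a file comes
    from the backing tier (SSD/disk); repeat reads are served from RAM."""

    def __init__(self, capacity_bytes, backing_store):
        self.capacity = capacity_bytes
        self.used = 0
        self.backing = backing_store          # dict-like: name -> bytes
        self.cache = OrderedDict()            # LRU order, oldest first

    def read(self, name):
        if name in self.cache:
            self.cache.move_to_end(name)      # mark as recently used
            return self.cache[name], "RAM"
        data = self.backing[name]             # slow path: read from disk
        self._admit(name, data)
        return data, "disk"

    def _admit(self, name, data):
        # Evict least-recently-used files until the new one fits.
        while self.used + len(data) > self.capacity and self.cache:
            _, evicted = self.cache.popitem(last=False)
            self.used -= len(evicted)
        if len(data) <= self.capacity:
            self.cache[name] = data
            self.used += len(data)

# A 5 MB stand-in for a 500 MB Revit model: the first architect pays the
# disk cost, and everyone opening it after that is served straight from RAM.
store = {"model.rvt": b"\x00" * (5 * 2**20)}
cache = RamReadCache(capacity_bytes=64 * 2**20, backing_store=store)
print(cache.read("model.rvt")[1])   # disk
print(cache.read("model.rvt")[1])   # RAM
```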
So when it comes to software-defined, the granularity and control you have is so far beyond what you get with these canned solutions. Back to our competitor, and you guys can cut me off anytime, we had a situation where we had purchased, I think it was 24 or 26 terabytes of storage. Well, we weren’t getting anywhere close to that, so I called them up and said, hey, what’s going on?
They looked at me and said, well, for you to migrate your data from tier one down to tier three, we have to take a daily snapshot, store that snapshot, and then start migrating the data overnight. Well, that snapshot was consuming storage that I needed for my production data, which was the impetus for me to go ask for the hard drive upgrade in the first place. So in order to get more storage out of our SAN, I had to go delete some of those snapshots.
Well, then you start looking at what DataCore brings to the table. You know what? I don’t need a snapshot to run tiering. It does it on the fly as you read and write data, and as older blocks get hit less and used less, it automatically moves them to slower, less expensive disk. However, if I want to snapshot something, I can write a script in DataCore that snapshots my volumes whenever I want and stores them on different volumes if I want to. So I didn’t lose the ability to snapshot, but I also don’t lose any storage to the auto-tiering of my data.
So that became a plus for us when we started to really compare the two. And then we realized DataCore has something called continuous data protection. With that feature, on my DataCore SAN I can roll back to any given point in time in the last two weeks, bring up a volume, and get data out of it as of that moment.
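Conceptually, continuous data protection keeps a time-ordered journal of writes so that a volume can be reconstructed as it existed at any instant inside the retention window. A minimal Python sketch of that idea, with hypothetical names and none of DataCore’s actual machinery:

```python
class CdpVolume:
    """Illustrative continuous data protection: every write is journaled
    with a timestamp, so the volume can be materialized as it existed at
    any instant inside the retention window."""

    def __init__(self, retention_seconds=14 * 24 * 3600):   # ~two weeks
        self.retention = retention_seconds
        self.journal = []                  # (timestamp, offset, data)

    def write(self, ts, offset, data):
        self.journal.append((ts, offset, data))
        # Drop journal entries older than the retention window.
        # (A real system de-stages old entries rather than discarding them.)
        cutoff = ts - self.retention
        self.journal = [e for e in self.journal if e[0] >= cutoff]

    def as_of(self, ts):
        """Replay writes up to ts to produce a point-in-time image."""
        image = {}
        for when, offset, data in sorted(self.journal):
            if when > ts:
                break
            image[offset] = data
        return image

vol = CdpVolume()
vol.write(100, 0, b"wall layout v1")
vol.write(200, 0, b"wall layout v2")
print(vol.as_of(150))   # {0: b'wall layout v1'} -- the volume at t=150
print(vol.as_of(250))   # {0: b'wall layout v2'}
```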
Mr. Chadwell: I had someone say that the audio had cut off; however, I can hear everyone. Oh, okay, we’re back online. I wanted to make sure we didn’t lose audio there. Didn’t mean to cut you off, Kent. But since I did: Steve, do you want to highlight a little more of what Kent is talking about here, moving between different tiers of storage and caching to RAM, and what that means? I know Kent just described it a little, but maybe tell us a bit more about what is going on behind the scenes.
Mr. Hunsaker: Sure. SANsymphony, which is the name of our product, does not use disk to cache; I think we find ourselves pretty unique in that arena, in that we actually use RAM to cache reads and writes. Of course, you eventually have to get it out of volatile memory and write it to disk, which brings us to what Kent was talking about: these tiers of storage.
Indeed, we can place differently performing drives into the same pool, which is kind of contradictory to tradition. You then build your own tier structure from fastest down to slowest. When we de-stage and write to disk, we write to tier one, and our auto-tiering works in real time.
It’s not using a cron job or a schedule to dictate when or how those blocks of storage are moved, whether they be promoted or demoted up and down the tier stack. So really, Kent, talk to us about how you manage auto-tiering.
Mr. Hanson: So for us, we have three tiers of data, and so people know, in DataCore you can do up to 15 tiers. When I say you decide when your drives become obsolete, you have 15 tiers to work with. So, briefly, what is auto-tiering? Auto-tiering means that, at the block level, you’re spreading your virtual drives across different types of media.
So for example, tier one for us is SSD; we have, I think, eight SSD drives. Then we drop down to our next tier, which is 10,000 RPM drives; we have several terabytes of that. And then, when the data gets too old and gets pushed down, it drops to 7,200 RPM SATA drives. So we’re running three tiers, and if you want to count RAM as a tier, I don’t know if you necessarily would, so we’re just going to say right now that we run three tiers of storage.
You have to understand: let’s say you have a large file, or, I don’t know, maybe a very large Exchange database or something like that. That Exchange database could, in theory, run vertically from our SSDs down through our 10,000 RPM drives and even into our 7,200 RPM drives, depending on which blocks were getting written and read continuously. The data that’s continually getting hit automatically moves to the fastest storage that you have, your tier one.
And you get to set that. Let’s say tomorrow I go out and say, hey, Jonathan, I need a bank of NVMe drives, and we plug that in. Well, then I go in and attach it, SAS-attached or however I want to attach it, and I say this is my new tier one. DataCore then readjusts all of my data so that I have a tier one, a tier two, a tier three, and then a tier four at my 7,200 RPM. Does that answer your question? Is that what you’re looking for?
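To make the promotion and demotion mechanics concrete, here is a rough Python sketch of heat-based auto-tiering. The class name, counters, and threshold are hypothetical; real implementations, DataCore’s included, work on sub-LUN blocks in real time with far more sophisticated heuristics.

```python
class TieredPool:
    """Illustrative auto-tiering: hot blocks migrate toward tier 0
    (fastest media), cold blocks sink toward the slowest tier."""

    def __init__(self, tier_names):
        self.tiers = tier_names               # fastest first
        self.block_tier = {}                  # block id -> tier index
        self.heat = {}                        # block id -> recent accesses

    def touch(self, block):
        self.heat[block] = self.heat.get(block, 0) + 1
        # New blocks start on the slowest tier until they prove hot.
        self.block_tier.setdefault(block, len(self.tiers) - 1)

    def rebalance(self, hot_threshold=10):
        # Runs continuously in a real system, not on a cron schedule.
        for block, hits in self.heat.items():
            tier = self.block_tier[block]
            if hits >= hot_threshold and tier > 0:
                self.block_tier[block] = tier - 1     # promote
            elif hits == 0 and tier < len(self.tiers) - 1:
                self.block_tier[block] = tier + 1     # demote
        self.heat = {b: 0 for b in self.heat}         # decay counters

pool = TieredPool(["SSD", "10K SAS", "7.2K SATA"])
for _ in range(12):
    pool.touch("revit-block-42")       # heavily read project data
pool.touch("archive-block-7")          # touched once, then idle
pool.rebalance()                       # hot block promoted to 10K SAS
for _ in range(12):
    pool.touch("revit-block-42")
pool.rebalance()                       # promoted again, now on SSD
print(pool.block_tier)  # {'revit-block-42': 0, 'archive-block-7': 2}
```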
Mr. Hunsaker: Yeah, I think that’s very insightful. And I would ask you to take it maybe a step further out, from the perspective of the users, the architects (we’ll call them your customers at the end of the day), and their projects. How does that improve their lives from an application perspective? What does that mean to them?
Mr. Hanson: Let’s say we have a project, and these people are creating visual files, large Revit model files. The data in that project directory is automatically stored vertically across those three tiers. They go in and open up that Revit file, and if it’s a file that has been opened in the last couple of days, we’re not sitting around waiting for it to be opened six times from disk, and possibly slow disk. It’s loaded into the RAM cache, and for the next person that comes along and hits it, it’s dropping onto the wire as fast as computer technology can drop it on the wire to get it to them.
Mr. Hunsaker: Nice. Quickly, I did have a question here. They were just curious: what was the server config that hosted Citrix with DataCore?
Mr. Hanson: What was the server config, the metrics for DataCore? In other words, you want to know what the CPU was and the RAM and all that?
Mr. Hunsaker: Citrix, the Citrix config, Kent.
Mr. Hanson: Oh, the Citrix config. Those were E5s at 3.3 gigahertz; I don’t know the exact model.
Mr. Chadwell: I believe they were the E5-2670 v2, and you had two in each one.
Mr. Hanson: Yeah, and we had anywhere from 512 gigs of ram to 768 gigs of ram per server.
Mr. Hunsaker: And then, Kent, we also have another question asking the other side which was what was the DataCore config for the servers?
Mr. Hanson: The DataCore servers are built out of Supermicro servers; I don’t know the exact model numbers anymore. They’re 2.6-gigahertz processors, the E5 series, with 10 drives each, and we have an active-active SAN. We actually run two SANs, and by the way, we run those two SANs for less than what it would have cost us to forklift-upgrade the competitor’s SAN.
In the middle of the day, I can turn off a SAN and nobody knows, because of the way that DataCore functions. On those two heads (I call them heads) that run the actual DataCore software, we have two mirrored drives that boot the DataCore head server, and then eight 2.5-inch SSD drives that are mirrored, and that becomes my tier one. Then there’s a SAS attachment into a JBOD; both of those servers drop into two different JBODs of 2.5-inch drives. And from there it drops into another SAS attachment into the 3.5-inch drive JBOD.
Mr. Kendrick: If I can point out, I know, Kent, that we’ve done a couple of additions, two or three, where we’ve added shelves, put in different boxes, moved your tier one, moved the drives, and replaced them, and you’ve been doing this for six years. And if I can ask a question (this could be dangerous because I don’t know the exact answer): have you ever been down?
Mr. Hanson: Not with DataCore.
Mr. Kendrick: There you go, that’s what I was hoping you were going to say.
Mr. Hanson: Yeah, not with DataCore. For any maintenance and stuff like that, with DataCore you say, I’ll take this side down, my left or my right. Or you can have up to, what is it, Steve, 16 nodes or something like that, and spread out horizontally as well, but we only run two of them.
Mr. Hunsaker: Yeah.
Mr. Hanson: They’re very powerful.
Mr. Chadwell: Over iSCSI or Fibre Channel? We had a question.
Mr. Hanson: We run Fibre Channel. Well, we run both, to be perfectly honest. The mirroring that happens between the two servers is 10-gig iSCSI, and then we have Fibre Channel out to the VMware servers.
Mr. Kendrick: You know, Kent, I just want to remind you of a story. I don’t know if you remember this or not. It was four or five years ago, you called me up in the middle of the day just to say that you were reviewing your DataCore logs and you had seen where you had had users accessing up to 325,000 IOPS and nobody called you, nobody complained, nothing slowed down. Nobody even noticed that you guys were having a storm like that.
Mr. Hanson: Right, right. Yeah, I remember that one. In full disclosure, 325,000 IOPS was not a typical everyday occurrence; it would hit highs occasionally, and that one was exceptionally high. For our typical run rate, it’s not unusual to hit 25,000 to 55,000 IOPS, depending on what’s going on.
Mr. Chadwell: Would you say it’s fair (I love hearing stories like that) that when I hear a story like that, increased performance sounds like, A, it helps you sleep at night, and it helps your company scale, right? You guys can effectively take on more projects as you hire new architects and continue to build these living buildings; it essentially gives you the stability to sustain that performance. Then there’s the high availability component, making sure business continuity is enabled; Jonathan just highlighted that it’s been six years or so with no downtime, no outages, with DataCore running on the backend. And from a cost efficiency —
Mr. Hanson: The point is, we’ve run six years with no outages on commodity hardware. And if there are any questions about the competitor SAN I’ve mentioned or referred to: technically it’s still in our data center, but it’s turned off. We no longer even use it.
We’re complete DataCore all the way through. The ROI and the total cost of ownership are so much less, and in my opinion it’s a better package. The only direction we’re going in the future is more DataCore. Our next step this year is to look at asynchronous replication to replicate the data off of our network. So people understand: we have a BGP network and we have a huge generator.
Even if the power goes out, we stay up. We’ve lost power several times and we’re always online. We have BGP, with two different internet providers supplying that for us, and so our next step is: we are our own cloud, and we’re going to be replicating our data with DataCore off to our remote sites. So for us, DataCore is just going to become more and more a piece of our production and business, and we’ll rely on it more and more.
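The replication plan Kent outlines follows the standard asynchronous pattern: a write is acknowledged as soon as it lands locally, then shipped to the remote site in the background, so WAN latency stays out of the application’s write path. A toy Python sketch of that idea, with invented names rather than DataCore’s replication engine:

```python
from collections import deque

class AsyncReplicatedVolume:
    """Illustrative async replication: local writes ack immediately;
    a background drain ships queued writes to the remote copy."""

    def __init__(self):
        self.local = {}
        self.remote = {}
        self.pending = deque()        # ordered replication log

    def write(self, offset, data):
        self.local[offset] = data     # fast local commit
        self.pending.append((offset, data))
        return "ack"                  # the caller never waits on the WAN

    def drain(self, batch=64):
        """Called periodically; in reality this runs continuously."""
        for _ in range(min(batch, len(self.pending))):
            offset, data = self.pending.popleft()
            self.remote[offset] = data            # simulated WAN send
        return len(self.pending)                  # remaining lag (items)

vol = AsyncReplicatedVolume()
vol.write(0, b"site plan rev A")
vol.write(1, b"site plan rev B")
print(vol.drain())                   # 0 -- remote site caught up
print(vol.remote == vol.local)       # True once the queue is empty
```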
If you stop and think about software-defined, what happens when you want to move your data out to Microsoft Azure, or you want to run a SAN out in Amazon? Well, if you have software-defined, there’s an opportunity there, and maybe you can talk to that a little bit, Steve. I think it’s supported now, isn’t it, Steve, to run that software in those environments?
Mr. Hunsaker: Right. Because our software is simply software that installs on a Windows server, and wherever a Windows server resides, whether it’s in your garage or at a cloud provider, it doesn’t matter.
Mr. Hanson: You build your own virtual SAN out in Microsoft Azure, or Amazon, I suppose. So if you want to re-learn different SANs, you could use different competitors, but if you want one solution, my money is on DataCore.
Mr. Chadwell: We’ve got a few questions here. I think we’ve done a pretty good job of painting the story so far, so any of you that have questions, please feel free to continue to put those in the chat. One of the questions: any experience using large arrays such as EMC VNX behind DataCore?
Mr. Hanson: No, not directly, but I believe that with the cache and the algorithm, it would make it faster. I did have somebody from DataCore tell me that one of the competitors wanted to use DataCore to provide active-active, because they couldn’t do active-active themselves; they could only do active-passive. But if you fronted that array with DataCore, you could do active-active. So I personally don’t have any experience with EMC or VNX, but I do believe it would go faster.
Mr. Kendrick: We do have experience on that and you can absolutely, you can put an EMC or Unity or you can put anything you want behind there as your storage. I won’t steal the thunder here but I know that one of DataCore’s largest customers, it’s not somebody that I was involved with but they are running on EMC, isn’t that correct?
Mr. Chadwell: That’s correct. Yep. I didn’t know if you wanted to add anything else there, Steve, or if you’re comfortable? Do you have anything to add there before I move to the next question?
Mr. Hunsaker: No, that’s simple. Anything that’s Fibre Channel or iSCSI or direct-attached storage can run behind DataCore; we continue to provide our features, the respective products underneath continue to bring their best features to the table, and we’re all good. Still there?
Mr. Kendrick: And you’re able to run that in case the first site goes down. We can actually use commodity hardware and build something up on a Supermicro box, saving the cost of having to purchase a third EMC.
Mr. Chadwell: We’re getting a lot of questions flooding in here. Any suggestions for the best way to keep the storage in good shape and prevent disaster?
Mr. Hanson: Who do you want to …?
Mr. Hunsaker: I’ll jump in there; I’m happy to respond to that. And I’m going to try, if it’s okay, since I see the questions coming in, to tackle as many as I can in one fell swoop. DataCore’s software resides on Windows Server, so you can take that as you want with Windows Update. We have a wonderful support site that can guide you through what hotfixes from Microsoft have done to Windows and whether or not you should be concerned with them; in other words, how close you should be to the cutting edge.
I think that speaks to Kent’s point about being able to test an update, whether it be a driver or firmware or something there, and maybe, Kent, you can allude to that. Being able to tackle that on one of the two sides, you’re very comfortable and confident knowing that you’re not going to bring down your entire SAN, because we do have data in two distinct places. So I would answer that question: it’s perfectly fine to look after drivers, firmware, and Windows updates. DataCore releases product service packs a handful of times a year, so we’re obviously making improvements and even releasing new features.
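Steve’s point about patching one side at a time can be pictured as a small state machine: as long as the mirror partner stays up, a node can leave for maintenance without the SAN losing availability. A hedged Python sketch with invented names, not an actual maintenance procedure:

```python
class MirroredPair:
    """Illustrative rolling maintenance on an active-active pair:
    one node at a time goes down for patching while its partner
    keeps serving every volume."""

    def __init__(self):
        self.up = {"node-a": True, "node-b": True}

    def serving(self):
        return [n for n, ok in self.up.items() if ok]

    def patch(self, node):
        partner = "node-b" if node == "node-a" else "node-a"
        assert self.up[partner], "never patch both sides at once"
        self.up[node] = False          # host enters maintenance
        # ... apply driver/firmware/Windows updates here ...
        self.up[node] = True           # rejoin; mirrors resynchronize

pair = MirroredPair()
pair.patch("node-a")                   # node-b served I/O throughout
pair.patch("node-b")                   # then the roles swap
print(pair.serving())                  # ['node-a', 'node-b']
```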
And then I saw a question about licensing. Happy to take that. Is that okay, Michael, if I just kind of fire away at these?
Mr. Chadwell: Yeah, yeah.
Mr. Hunsaker: Instead of taking the time to bounce back.
Mr. Chadwell: People are asking if they can get a copy of this afterwards so yes, you will be able to download this. Keep going.
Mr. Hunsaker: Sure. So we license by terabytes of usable capacity. If you have 50 terabytes that you want to use, we would provide you with a quote for 100, because we mirror that 50 to another server and that totals 100. How does DataCore compare to Datrium? Datrium is very different in that they provide a driver that presents storage to VMware. We are actually capable (and I don’t think Datrium can do this) of taking a Fibre Channel LUN, an iSCSI LUN, and direct-attached storage, aggregating it all together, and, as we’ve already painted that picture, auto-tiering it.
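The licensing arithmetic Steve describes reduces to a one-liner; the function below is just an illustration using his 50-terabyte example, not an official calculator:

```python
def licensed_tb(usable_tb, copies=2):
    """Usable capacity is licensed once per synchronous mirror copy."""
    return usable_tb * copies

print(licensed_tb(50))   # 100 -- a 50 TB mirrored volume is quoted as 100 TB
```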
We read, write, and cache in RAM, and we are then capable of not only, as some other questions have alluded to, being hyperconverged (which the way we define it is actually giving storage to the very host that I am standing on), but also, while doing that, being perfectly capable of serving up that disk to an external environment at the same time, which I don’t believe Datrium can do.
Let’s see here.
Mr. Chadwell: Will DataCore run on top of any piece of hardware, or are there any that it doesn’t?
Mr. Hunsaker: I saw that. Yeah, we work really well with any x86 server, the likes of Lenovo, Supermicro, HP, Dell. We do have a few documents that point out what does not work; that’s probably a good way to approach it, since rather than publishing everything that works, it’s easier to list the exceptions. We’re perfectly happy to provide any labs that you may need, and we do have trial software for 30 days that’s fully functional.
We do offer discounts for non-profits. We have reference architectures on our website built for most of the server hardware manufacturers. We run on Windows Server, so to run on VMware or Linux, we would be a virtual machine.
Thin provisioning is actually something that we invented and have a patent on, so yes, we support thin provisioning and offer that. In other words, our virtual disks can be greater than the pool that you have beneath. Am I missing anything else? Yes, Kent.
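Thin provisioning in miniature: the virtual disk advertises more capacity than the pool physically holds, and physical blocks are consumed only on first write. A hedged Python sketch of the concept, not DataCore’s implementation:

```python
class ThinDisk:
    """Illustrative thin-provisioned disk: the logical size may exceed
    the physical pool; space is consumed only when blocks are written."""

    def __init__(self, logical_blocks, pool_blocks):
        self.logical = logical_blocks
        self.pool_free = pool_blocks
        self.allocated = {}           # logical block -> data

    def write(self, block, data):
        if not (0 <= block < self.logical):
            raise IndexError("beyond the virtual disk")
        if block not in self.allocated:
            if self.pool_free == 0:
                raise RuntimeError("pool exhausted -- time to add shelves")
            self.pool_free -= 1       # allocate on first write only
        self.allocated[block] = data

# A 100-block virtual disk backed by a 10-block pool works fine until
# actual writes approach the physical capacity.
disk = ThinDisk(logical_blocks=100, pool_blocks=10)
disk.write(0, b"x")
disk.write(99, b"y")
print(disk.pool_free)   # 8
```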
Mr. Hanson: Can I just throw in there (and this is when I really drank the Kool-Aid) that DataCore is no Johnny-come-lately to storage. I think, and correct me if I’m wrong, but wasn’t the beginning at NASA years ago? The engineers started from NASA and grew it from there. I mean, it’s been around for a long time.
Mr. Hunsaker: That’s correct.
Mr. Hanson: Very, very trusted.
Mr. Hunsaker: Michael, am I missing any questions that you’d like to have me follow-up on?
Mr. Chadwell: So yes, everyone will get a copy. Differentiators from leading competitors: I’ll take that one a bit. I think you’ve heard today that because we’re software only, we can run on any hardware, so ideally you’re going to get better cost; we’re definitely going to help with that. There’s the efficiency component we just heard Kent talk about, and additional performance. Most importantly, someone was asking about industries or verticals that we succeed in: pretty much any data-intensive environment.
Especially anything mission critical, high availability or business continuity. When I look at verticals, I think of hospitals since we have over 2,000 hospitals that rely on DataCore to ensure that their data is always up and always running. When Hurricane Sandy hit, we actually had a hospital whose data center went under water. These guys are in New York and thank goodness that they had the active-active architecture and their other data center across the Hudson was actually up and running fully functional.
That hospital was admitting patients; people were able to run their MRIs and deal with all of the other surgeries, issues, and complications that came because of the storm, whereas others were down. So we see that a lot.
Request a quote. Feel free to reach out to — I believe our contact info is in here somewhere. We’ll make sure to send you out our contact info. It’s right here on this last slide, info@datacore.com. Feel free to shoot a note out to any of us here on the webcast. For me, it’s michael.chadwell@datacore.com.
Wanted to also tackle — what was this last question here — anything else I should’ve mentioned there, Steve?
Mr. Hunsaker: No, I mean, there’s a handful of questions that we can probably tackle. I don’t know if we know who it was who asked it but there’s a lot we haven’t answered guys and I appreciate your contribution. My email address is steve.hunsaker@datacore.com.
Mr. Chadwell: Yep, and as mentioned, this is the beginning of a web series, so please join us for the next one. We will have another customer come on and tell you how they use DataCore, along with another one of our partners, more in the central region, Customer First Basis consulting, on February 26th. So we look forward to having you guys join that one as well. We’ll be following up on all the additional questions here, and you will get a copy of today’s presentation.
Kent, can’t thank you enough for coming on board and chatting with us today. Jonathan, always great having you, and I appreciate all the work you’ve done with us over the years. And Steve, you really help us put the authority in software-defined storage. So I can’t thank the panel enough for today, and thank you guys for joining us virtually. Looking forward to working with you further and getting in touch. Please view the attachments and links within the webcast as well; feel free to download some of that. For those of you requesting materials, we do have the case study and a few other, more technical documents for you there. With that, I know we are at the top of the hour, so I want to say thanks and wish everyone a wonderful rest of the week. Go ahead, Carlos.
Carlos: I’d like to announce the winner of the $200 Amazon gift card. The winner is [John Casella] from Pulte Group. We’ll be in touch with you and again, thank you everyone for attending. Have a great day.
Mr. Chadwell: Thank you, everyone.
Mr. Hunsaker: Awesome. Thanks, everyone.