Mike Matchett: Hi, I'm Mike Matchett with Small World Big Data, and today I've got a very exciting conversation going with Mark Lewis, the CEO of Violin, which used to be Violin Memory and is now Violin Systems. They're coming back to offer the market the fastest flash system, and we're going to talk about extreme performance systems. Welcome, Mark.
Mark Lewis: Hey Mike, great to talk with you.
Mike Matchett: All right. So Violin has always been known as a fast flash array maker. What's the new take now, coming back and reinventing yourself in some way in the market? What's happening with these new solutions you're bringing out?
Check out the full video here: https://www.truthinit.com/index.php/video/2035/violin-systems-storage-for-extreme-performance/
Mark Lewis: Yeah, great question. A big part of Violin today is that we want to be really focused on one segment of the market and deliver to that set of customers the best product in the world. For us that's Tier 0: the high-performance, extreme-performance storage market, where Violin already has the best technology, and we're really honing in on that market. Our differentiation is pretty simple: we're going to give you the world's fastest storage with all of the enterprise data services that you can get from any other supplier in the market today. So it's not just speed, it's speed with all the data services.
Mike Matchett: So let's just dive down a little bit quickly. When we say performance, we're not simply talking about how many millions of IOPS you can throw out, and it's not really about how much bandwidth, though that's part of it. Those are thresholds. What we're really concerned about is what I call response time and what the market tends to call latency, right? This is about consistent latency, which is really what performance means to people who study performance. Latency is, as I call it, the gift that keeps on giving.
Mark Lewis: Once you hit a certain threshold of bandwidth, that's enough. Everybody knows this with their phones: you need a certain amount of bandwidth to talk on your phone or watch Netflix or whatever, but ten times that bandwidth doesn't make the movie any better. So you need thresholds of performance in IOPS and bandwidth, but latency is one of those things where the shorter you can make it, the better. I always say it's like commuting to work: bandwidth is how many lanes you've got on the highway, IOPS is how many cars the highway can take, and latency is how long it takes to get to work. And that's what you really care about, and that's what really drives performance. The shorter you can make my commute, the better it is.
Mike Matchett: And we get high utilization: crowded lanes mean higher latency. But at the pure service level, if we talk about servicing that IO, your service time sets your latency, so the boxes that can deliver lower latency give you better performance. And there's a subtlety there: if I have a lot of faster transactions, I don't need as many lanes, I don't need as much capacity. I can actually save money on the back end with something that has a faster service time.

Mark Lewis: And Mike, that's the key. What folks often don't understand is that if I can quadruple your performance from a latency perspective in storage, you could potentially halve your server infrastructure, because you don't have all those servers sitting around waiting for data. And when servers are sitting around, they're still consuming CPU, they're still consuming memory; they're just waiting for stuff to happen. We eliminate those wait times, and that changes the paradigm of servers. And what do you do on servers? You license software, oftentimes very expensive software like Oracle, like SQL Server, like VMware, and it's all tied to the size of the CPU. So the savings can be amazing. We have folks all the time who come in and say they saved three times the price of the storage system in infrastructure savings alone, just by changing their storage. It's a very powerful thing, and it's something we want to get the message out about more and more.
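The connection between lower latency and a smaller server footprint can be sketched with Little's Law (in-flight requests = throughput × latency). This is an illustrative back-of-envelope only; the workload size and latency figures below are hypothetical, not Violin specifications.

```python
# Illustrative sketch: Little's Law says average in-flight requests equal
# throughput times latency, so lower storage latency means less concurrency
# (and fewer waiting servers) to sustain the same workload.
# All numbers here are hypothetical examples, not vendor specs.

def outstanding_ios(iops, latency_us):
    """Little's Law: average IOs in flight to sustain `iops` at `latency_us` microseconds."""
    return iops * latency_us / 1_000_000

target_iops = 200_000                     # hypothetical database workload

slow = outstanding_ios(target_iops, 400)  # ~400 us on a classic flash array
fast = outstanding_ios(target_iops, 100)  # ~100 us on an extreme-performance array

print(slow)  # 80.0 IOs in flight
print(fast)  # 20.0 IOs in flight: 4x less concurrency to host, hence the server savings
```

The point of the sketch is Mark's argument in miniature: the same transaction rate at a quarter of the latency needs a quarter of the outstanding work, which is what frees up server capacity and the per-CPU licenses tied to it.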
Mike Matchett: You're going to focus back on the top of that performance pyramid and say you'll give people the best performance possible, better than anybody else. And if people really take a look at that, they might find benefits that outweigh going with slower flash or something else. In fact, it may turn out that I have to tier the flash within my data center: I want flash-level tiering because I need to provide that extreme performance up here, and all-flash is not just one level of flash anymore.
Mark Lewis: Well, you're totally right, Mike. That's what we always say: we deliver the same kind of performance improvement that folks got going from disk to flash. From a classic flash array by one of the big vendors today, you go to the Violin system and you're going to see that same performance jump. So if it's an active database or VMware application, we can have a significant impact, and whether you call that Tier 0 or Tier 1 storage, we believe we can help any app that has that particular characteristic.
Mike Matchett: Awesome. Part of what I have to say is that sub-branding Violin around extreme performance is really a return to the roots. From my experience with Violin over the years, there's a new version of storage coming out that really focuses on extreme performance by a couple of measures. What are you doing in this new solution to take back the forefront?
Mark Lewis: Yeah. So one of the things in coming back is that we wanted to show folks we still want to be on the edge of innovation, so we did a few things to make sure we had what everyone else has, like adding predictive analytics and cloud analytics into all of our systems. That's in some of the other arrays today; I acknowledge that, and it's something we wanted to have as well. And we're staying ahead on the performance game. I've told the team that's the number one focus area for us. So you're going to see sub-100-microsecond latencies now, and as we add NVMe, which is all enabled in this box (NVMe over fabric, NVMe devices), we're going to be talking in the area of 50 microseconds on the lower side. Incredible latency and performance. And there are some cool new things coming in the next generation. Probably the most interesting is that we're going to use augmented reality for servicing and monitoring of the systems: you can literally hold your iPhone up to the box and see the temperature, or see which module needs to be replaced. One of our customers said this is the first actual use of augmented reality that makes any sense, so we're really pleased with that.
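Those latency figures translate directly into how fast a single synchronous stream can run: at queue depth 1, throughput is capped at 1/latency. A rough sketch, using the ballpark latencies from the conversation rather than measured specs:

```python
# Back-of-envelope sketch: a fully synchronous (queue-depth-1) workload can
# complete at most 1/latency operations per second, so latency directly caps
# single-stream speed. Latency values are ballpark figures, not measured specs.

def qd1_ops_per_sec(latency_us):
    """Max queue-depth-1 operations per second at a given latency in microseconds."""
    return 1_000_000 / latency_us

print(qd1_ops_per_sec(500))  # 2000.0 ops/s at a ~500 us legacy array
print(qd1_ops_per_sec(100))  # 10000.0 ops/s at sub-100-microsecond latency
print(qd1_ops_per_sec(50))   # 20000.0 ops/s at the ~50 us NVMe target
```

Going from roughly 100 microseconds to 50 doubles what any single dependent chain of IOs can do, which is why latency, not raw IOPS, is the headline number here.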
Mike Matchett: That sounds like a lot of fun. And this new NVMe-focused system: we didn't call arrays flash-engineered, and no one's using the term NVMe-engineered, but this is really engineering for the cutting edge of solid state, right? And it's coming with all the storage services that you guys have long brought to flash. This is not leaving something behind, saying here's a bleeding-edge box and in a few months or years we'll catch the rest up. You're bringing this with all the dedupe and all the snaps and clones and all the other stuff, right?
Mark Lewis: Yeah, and even more cool features. Probably one of the coolest is what we call "selectable de-dupe." If you have an ultra-high-performance need and you don't want de-duplication to occur at all, you can now select that at the LUN level, so you don't have to use it. Previously you had to buy two different systems and configure one one way and one the other. This gives you a chance to say: I've got this ultra-high-performance portion of my database, and I don't want to de-dupe it; and I've got this general portion of my database that's more about capacity, so I can de-dupe that. In the same box we can do both. And yes, to your original question: asynchronous and synchronous replication, thin provisioning, all of the checkbox features are there day one in the new system.
Mike Matchett: We talked briefly before this about the workloads you would target with extreme performance, and at one level you think, well, okay, it's just mission-critical apps; high-frequency trading comes to mind, or flying drones over the desert where we've got to make sure we don't crash anything. But briefly, tell me: what are some of the use cases you see emerging that the extreme performance market would cover going forward?
Mark Lewis: One of the simplest ways to put it: anything that is running on a storage area network or block storage is a candidate for extreme performance storage. In broad terms today, if it's running on top of a database or on top of VMware, there's a good likelihood we can have an impact on that application. The market today is database online transaction processing, ERP, supply chain, the mixed data center, Tier 1 mission-critical apps. And the new areas that are cool are a couple: IoT and machine learning. What's interesting for IoT is what we're being told about machine-to-machine interaction. When you or I interact with the cloud and click to watch a movie on Netflix, if there's a half-second delay starting that movie, we don't really care. But when you get machine-to-machine transactions like IoT, with lots of small interactions, like a temperature sensor constantly telling the building how warm it is, you have to have very low latency, because these are machines talking to each other, and you have to multiply that all out. So we think IoT is actually going to drive a big resurgence of growth in transaction processing. And likewise with artificial intelligence: it's not necessarily the deep learning that requires the ultra-low latency, it's the actual implementation, what we call the inference engines, the things that take the rules and run them. What you run inside your car to avoid collisions: your car isn't doing deep learning, your car is taking that learning and making sure it doesn't hit things. So as we look at all of these devices, and consumer fraud protection and everything else, we think there is going to be a large uptick in low-latency requirements around the AI/ML space.
Mike Matchett: Particularly as it gets close to that real-time edge where I have to control things, not just react to things. Latency matters, as always. I think we're running out of time; there are a hundred things we could probably talk about at some point, but thank you for being here today.
Mark Lewis: You're very welcome, Mike. I really enjoyed the conversation. Thanks for having me.
Mike Matchett: And I'll give you the last word here. The website's great, I'm sure. Anything else people should know if they're looking for more information?
Mark Lewis: Great new product. I don't think I ever mentioned the name: it's called the XVS8, our next-generation product. It is awesome. I guarantee that if you put one in your system, you're going to be blown away. In fact, I'll make a guarantee to everyone who listens to this: if you take the POC and you're not overwhelmed enough to literally buy that unit from me, I'll buy you a drone.
Mike Matchett: Got it!