Hyper Convergence & Virtualization: Let's Make IT Simple

Mike Matchett: I'm Mike Matchett with Small World Big Data, and we're gonna talk today about one of my favorite topics, which is hyper convergence. How do you bring together and converge different layers of the I.T. stack, really with an eye towards simplifying I.T., making it simple? Simplicity is the watchword.

 

Mike Matchett: We want to take a lot of things off the I.T. table in terms of integration, support, maintenance, management, patching, upgrades. All the things you have to do to keep a bunch of different silos working together, and instead work with a single intuitive architecture. And especially now, one that can be consistent from our data center, out to the edge, maybe even up to the cloud. We don't want to be switching architectures all the way along that line. To talk about this, I've got Alan Conboy, the CTO from Scale Computing. Welcome, Alan.

 

Alan Conboy: Hey. Thanks. Good to join you again.

 

Mike Matchett: So Alan, just for the folks watching, give us a little thumbnail history of Scale Computing and hyper convergence, what you guys were thinking when you built this originally, and where that's taking you now.

The Journey of Scale Computing and HCI

Alan Conboy: Sure, absolutely. So, a decade and change ago, we sat down to answer a simple question: why was virtualization so incredibly complex, so expensive? Why did it take multiple people from multiple technology disciplines to make it work in the first place? And one of the things we discovered was that it really wasn't designed at all. Highly available virtualization just kind of organically grew out of a science experiment in the mid '90s, taking the concepts of virtual machines from the mainframe space and trying to make them work in the x86 space.

 

Alan Conboy: So what we decided to do very early on was sit down and take a clean sheet of paper approach to how highly available virtualization really should have been designed in the first place. Make it efficient. Make it easy to use. Make it easy to manage. Make it so that you could literally take a kid straight out of the local community college still waving around his A+ Cert like he's the first human to ever get one. And make it so that that guy could stand up a virtual infrastructure that was highly available, fault tolerant, could lose anything anywhere at any time, and have everything just keep going. Make it autonomous if you will. Make it so that virtualization and virtual infrastructures could be approachable by anybody.

 

Mike Matchett: All right. So then we've seen Scale really grow. You're a hundred percent channel focused, so people might not hear about Scale Computing the way they hear about IBM or Dell for enterprise data centers. But you've got a serious number of big partnerships now, and you're showing up in a lot of places through those partnerships. And now in the cloud, perhaps even more visibly. In particular, you found that your stack, your particular stack, the one that you guys make at Scale, is highly efficient. Perhaps more efficient than many other stacks out there, right?

 

Alan Conboy: Oh, very. Very, very.

Efficiency Is Key

Mike Matchett: All right. So it's not just that you converged things, you also made it extremely efficient. And part of that efficiency is because of the storage layer, which I know we could talk about in depth if we had a lot of time. But where's that efficiency taking you now? Did you find that with this total efficiency in this package, you know, you can take it out of the data center and go other places, right?

 

Alan Conboy: Absolutely. And that was kind of something that really struck us as we were designing HC3: look, we can make our entire stack, the DR stack, the storage componentry, the management stack, everything, fit in four gigs of RAM and a fraction of a core. Now, what that really has translated to, as you start thinking beyond the core data center, beyond the SMB and mid-market space, is it allows us to do things at the edge that nobody else really can, and allows us to fit on devices that nobody else has even really thought of. Things as small as the Intel NUC running on the oil platform out in the Gulf, or the cash register at a grocery store in the back office.

 

Alan Conboy: It has allowed us to really go to the edge with the exact same user experience, the exact same stack, whether it's the edge layer, the fog layer, the core data center layer, or the cloud, and have a fluid compute experience throughout that anybody can do.

 

Mike Matchett: Yeah. All right. So this Intel NUC is what, you know, this big?

 

Alan Conboy: Yeah, give or take.

 

Mike Matchett: Something the size of a deck of cards, as you're saying. Like, people can deploy a full hyper convergence stack, with your native storage in there and everything else integrated, on something that big?

 

Alan Conboy: Yes.

 

Mike Matchett: Yeah. And, you know, all over the data center at the same time. And you just mentioned something I think we should talk about which is up in the cloud. How does it work to take Scale up into the cloud?

 

Alan Conboy: We got our entire stack to run natively in GCP instances, and then we took it a step further and handled the networking too. We were first to market, by far, with a layer 2 VXLAN overlay that made those Google resources, that instance running on GCP, appear on the customer's local network doing local networking. So when you're moving a workload from location A to location B, moving it from, you know, on-prem up to GCP, there are no networking changes of any kind. It all just seamlessly goes throughout. Very much that fluid computing model, where geography is more a function of need at the moment rather than having to make considerations about what fits where and having to switch between this interface and that interface and this one over here. Just a seamless, easy-to-use infrastructure no matter where it goes.
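
To make the layer 2 overlay idea concrete, here is a minimal generic sketch of how an L2-over-L3 VXLAN tunnel can be stitched together on a Linux host with standard iproute2 commands, driven from Python. It is not Scale's Cloud Unity implementation, just an illustration of the concept Alan describes; the interface names (eth0, br0), the VXLAN ID (100), and the remote endpoint address (203.0.113.10) are hypothetical placeholders, and the commands assume root privileges and an existing local bridge.

    # Generic illustration of a layer 2 VXLAN overlay (not Scale's implementation).
    # Assumes a Linux host with root privileges, an uplink "eth0", and an existing
    # bridge "br0" that local VMs are attached to. All names/addresses are hypothetical.
    import subprocess

    def run(cmd: str) -> None:
        """Run an iproute2 command and fail loudly if it errors."""
        print(f"+ {cmd}")
        subprocess.run(cmd.split(), check=True)

    # Create a VXLAN interface: Ethernet frames entering it are encapsulated in
    # UDP/IP (standard VXLAN port 4789) and sent to the remote tunnel endpoint.
    run("ip link add vxlan100 type vxlan id 100 remote 203.0.113.10 dstport 4789 dev eth0")

    # Plug the VXLAN interface into the local bridge so machines behind the remote
    # endpoint share the same layer 2 broadcast domain as the local VMs; a workload
    # can then move to the other side without any re-addressing.
    run("ip link set vxlan100 master br0")
    run("ip link set vxlan100 up")

Because the overlay stretches the same broadcast domain across sites, a VM keeps its IP and MAC addresses when it moves, which is what makes the "no networking changes of any kind" experience possible.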

Using This In The Real World

Mike Matchett: So tell me a little bit about, if I have hyper convergence infrastructure, what are some of the really interesting use cases for this that folks haven't really been able to nail down before? With the smallness, the convergence, the efficiency, where do you see somebody really grabbing on to this now?

 

Alan Conboy: You know, the good folks at WinCo in Puerto Rico have a great story to tell about that specific one. They run those small HC3 clusters at every single one of their franchise locations throughout that island. A bit of background on them: they're the folks that franchise all the Applebee's, all of the Wendy's, all of the various restaurants on the island. They run a small edge cluster at every single one of those stores, handling all of the infrastructure requirements of the stores, whether it's employee management, inventory, cash register stuff, all of that. They run a larger cluster at their core data center for computational work on the data coming in from the edge, and then replicate that up to GCP using our Cloud Unity interface.

 

Alan Conboy: Now, you may recall, a couple of years ago or just about a year ago, things got awfully wet on that island. That core infrastructure, that main data center, was directly impacted by those storms and no longer exists. With a click, they pushed all of those workloads up to GCP. And the same guy that's sitting there, the store manager capturing data at the restaurant, at the local Wendy's, has the exact same interface as the guy doing large computational workload stuff out in GCP. It's the same experience throughout. One thing to know, one thing to manage. That's what efficiency gets you: being able to do real-world work that's approachable by people who aren't necessarily I.T. specialists at all.

 

Mike Matchett: Yeah. Which is really to say it's not that we don't need I.T. people, but now they can focus on business value rather than having to manage and try to integrate disparate silos of architecture and, you know, handhold them and troubleshoot things.

 

Alan Conboy: Yeah. I would put it this way: what we deliver is an architecture that lets you not have to think about your architecture, so you can spend your time on things that move the business forward.

A New, Different, and Better Storage Layer

Mike Matchett: Which is great. Let me just draw you back down to one little thing. Tell me a little bit more about the storage layer that you guys wrote. Why is it a little bit different? Why is it different than a lot of the other hyper-converged solutions out there? And why do you think that gives you an edge in new technologies?

 

Alan Conboy: Sure. So the legacy model for storage and virtualization involves SANs, storage protocols, and a ton of resource consumption. You know, rip open the lid of anybody's SAN, I don't really care whose, and you're gonna find a pile of cores and a pile of RAM that does nothing but handle the overhead of LUN management, storage protocol overhead, et cetera. Well, when we sat down to create HC3, one of the first things we wanted to eliminate was all of that overhead.

 

Alan Conboy: We run our own block-level data management engine called Scribe that eliminates all of the hops that you find in legacy infrastructures. It leverages the disks internal to the cluster, but does it without needing a virtual SAN appliance. All the other guys out there basically just virtualize the SAN and then cookie-cutter a copy of it onto each one of the nodes in their architectures, along with all of the CPU and RAM resources that SAN consumes.

 

Alan Conboy: We didn't just hide the SAN like they did. We eliminated it completely. We maintained all the functionality and benefits that the SAN provided, but without any of its cost in overhead. We take that same pile of cores and RAM that used to have to go into running a SAN, and still does with so many other architectures, and put it directly into running workloads, which is kind of why you virtualize in the first place.

 

Mike Matchett: Yeah. I mean, it sounds like this brings us back around the circle to why you have such an efficient stack and can run out to the edge or in a cloud in the first place. Well, I think that's all we have time for today, Alan. But tell us: if I'm interested in this, if people watching the show are interested in this, obviously you have a website, but is there any particular thing we should go look for or do? What's gonna be the next step to learn more about Scale, HC3, and hyper convergence?

 

Alan Conboy: You know, fun fact here. One of the things that we here at Scale do is run, a couple of times a week, a live open interactive discussion forum with myself or one of the other folks here at Scale that no sales guys are invited to. It's literally just an open, what-would-you-like-to-know kind of forum that lets people get answers to whatever they're specifically interested in about hyper convergence in general or HC3, whether it's a technical deep dive or whether it'll work with what they've got in mind.

 

Mike Matchett: All right. And we can find that forum on your website, I'm assuming.

 

Alan Conboy: Yes.

 

Mike Matchett: Awesome. Thank you for being here today, Alan.

 

Alan Conboy: I'm glad to have spent the time with you guys.

 

Mike Matchett: And I love seeing the regional Scale lab behind you there with some of the original equipment and new things also going on. That's awesome. Thank you for watching today. I'm sure we're going to be back with more hyper-convergence information from Scale Computing as they're doing some really great things. Take care, guys.