How to leverage containers in a multi-cloud or hybrid on-prem and cloud strategy to avoid vendor lock-in while heading toward a self-service DevOps model
Hi, I'm Mike Matchett with Small World Big Data. And today, we're gonna talk about some of my favorite things all converged together: big data, containers, and hyperconvergence of Kubernetes environments. We're gonna have Robin on here in a bit, and they're gonna tell us about what they're doing lately. First, I just wanna say, look, there's no doubt convergence is coming to the data center. Convergence is coming to the cloud; convergence is the way we're gonna get hybrid cloud built. We're gonna use containers, the world's going towards a container market, so you gotta get ready for that. VMs are great, but containers are the future. And you gotta look at scale at some point. This is a world where there's gonna be thousands, hundreds of thousands of things, and they're not all gonna be in your data center on your own piece of hardware; they're gonna be scattered everywhere. So you gotta handle the management even better.
See the full video here: Hyperconverged Big Data & Containers
With that, let me introduce Premal Buch, CEO of Robin Systems. Welcome, Premal. - Mike, thanks for having me on the call, and that was a great intro. - Alright, so, having said that, when did you guys get started and what was, sort of, your motivation? It was just a couple of years ago, right? - Yeah, that's right, so we are about a four-year-old company. We are the hyper-converged Kubernetes platform for big data and databases. Our vision when we started out was to really bring that modern DevOps experience to complex, very big distributed data applications, and that's what we are doing with Kubernetes. - And when you started out, you were kind of doing something a little more custom; you've now gone to Kubernetes as a standard, everyone is sort of standing on Kubernetes. How's that going? I mean, do people now sort of understand more about what you're really doing? - Yeah, absolutely. So when we started out, we were clear that containers are what we wanted to build around, because that was the new technology three years ago. We saw that as a way of really giving the performance that some of these big applications need, while also giving them portability and agility. And with that in mind we built an end-to-end system, which had our own container orchestration piece, plus storage and networking pieces. Kubernetes wasn't then what it is now. Talking to customers over the last year, year and a half, we saw this constant refrain: "Great technology, I really want to use this, but I also want it on the back of Kubernetes, because that's where the rest of the organization is going in terms of stateless applications and all of that."
So that's where we started focusing on how to bridge the gaps between Kubernetes as it is today and what it needs to be to handle these applications. And that's what we call hyper-converged Kubernetes. We said that Kubernetes by itself is not enough; you need a whole bunch of other things to manage the storage and networking for a lot of applications that don't fit the microservice philosophy very nicely. That's where the hyper-converged angle comes in, and that's what we bring together in a single platform. - And when we talk about this, I know we were talking and you didn't really say "enterprise" too much, and I hear that a lot from people who sell infrastructure. You guys are really aiming at application developers in terms of your current messaging, although you sit right in the middle between applications and infrastructure. But tell me a little bit, and tell our folks: what have you brought to the storage and networking parts of Kubernetes to really make this work at scale for production workloads? - So when you look at applying Kubernetes to the kind of applications I've been talking about, there are a few things missing that you can't get simply by plugging the standard software-defined storage volume plugins into Docker and Kubernetes. What you really need is the ability to maintain network and storage persistence, so that when things move around, you maintain identity. There are a lot of assumptions in these applications, particularly the ones from before Docker came around, which rely on maintaining IP addresses, or applications that write to the root file system, which normally gets rebuilt when you move things into a container environment like Kubernetes. These are things that break when you use Kubernetes just out of the box.
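To make the identity-persistence gap concrete: out of the box, Kubernetes only gives a workload a stable network name and durable per-replica storage if it is declared as a StatefulSet backed by a headless Service; a plain Deployment gets a new pod name, a new IP, and a rebuilt root filesystem on every reschedule. A minimal sketch of the StatefulSet approach (the `legacy-db` name and image are illustrative, not from the interview):

```yaml
# Headless Service gives each pod a stable DNS name
# (legacy-db-0.legacy-db.default.svc.cluster.local),
# which legacy apps that pin hostnames depend on.
apiVersion: v1
kind: Service
metadata:
  name: legacy-db
spec:
  clusterIP: None          # headless: per-pod DNS records instead of a VIP
  selector:
    app: legacy-db
  ports:
  - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: legacy-db
spec:
  serviceName: legacy-db   # ties pod identities to the headless Service
  replicas: 3
  selector:
    matchLabels:
      app: legacy-db
  template:
    metadata:
      labels:
        app: legacy-db
    spec:
      containers:
      - name: db
        image: postgres:15
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:    # each replica keeps its own PVC across reschedules
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 100Gi
```

Even with this, stock Kubernetes does not preserve pod IP addresses across reschedules, which is exactly the class of assumption Premal says breaks for pre-Docker applications.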
That's one notion: the basic ability to deploy and maintain high availability and resiliency. So that's the day-zero stuff. Once these applications are running, you still need to worry about performance management and things like that. If you are just serving out storage volumes without understanding how they're being consumed by the application, then things start breaking down when your data lake or shared-data-architecture repository is cloned, and you've got multiple applications hitting the same storage volume somewhere on the same physical disk. So now you need to maintain performance isolation not just at the container level, but also through the network, down to the storage level. That's where just plugging storage volumes into Kubernetes is not going to cut it. When people want performance isolation in a shared environment, you need that end-to-end integration, where the storage understands how those volumes are being used by the application and, really, by the container layer. And that's what we have done. - Really, I mean, because it's not enough just to map a volume to a container from a third-party storage array. The third-party storage array is not gonna know anything about how the volumes are related to each other, - Exactly - and as we pointed out, you're gonna get collisions and cache problems and all that stuff. - Exactly, so there are performance issues, and then think about the simple example that we were talking about earlier. You take an application like Hadoop. It does three-way replication so that you don't need enterprise-grade storage on the back end. But if I put all of those copies on the same node, or the same rack, you will think you have three copies but you really don't. - Right - So that's where you can't just blindly say "give me a volume," get the volume back, and put it into a container.
You need to understand what needs to be together and what needs to be apart, and that's the application-level manifest that needs to be translated all the way down into the storage level.
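The "three copies on the same node" failure mode Premal describes maps onto standard Kubernetes anti-affinity rules: a hard rule can forbid two replicas from sharing a node, and a soft rule can prefer spreading them across racks or zones. A hedged sketch (the Hadoop datanode naming, image, and rack label are illustrative; real clusters would need the operator to label nodes with their topology):

```yaml
# Spread datanode pods so HDFS replicas never share a node, and
# prefer separate zones/racks, so three logical copies are really
# three independent physical copies.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: hdfs-datanode
spec:
  serviceName: hdfs-datanode
  replicas: 3
  selector:
    matchLabels:
      app: hdfs-datanode
  template:
    metadata:
      labels:
        app: hdfs-datanode
    spec:
      affinity:
        podAntiAffinity:
          # hard rule: no two datanode pods on the same node
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: hdfs-datanode
            topologyKey: kubernetes.io/hostname
          # soft rule: prefer different zones (or racks, if nodes
          # carry a custom rack label set by the operator)
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: hdfs-datanode
              topologyKey: topology.kubernetes.io/zone
      containers:
      - name: datanode
        image: apache/hadoop:3
```

This only handles pod placement; the interview's larger point is that the storage layer also has to honor the same together/apart constraints when it places the volumes themselves.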
- Right, so if we're familiar with containers, we know there's a container manifest, kind of, that tells us what that container is and what it's plugged into. One of the things you guys really bring to the table is a higher-level construct of a manifest. It's an application-level manifest that talks about all the pieces that have to go together, have to be monitored together, have to move together, have to have at least connectivity as you create larger and larger environments, which I think is brilliant, right. - Yup. - And in fact, you know, ultimately, while we are spending a lot of time talking about Kubernetes and network and storage, our vision is to give you an app-store experience at a higher level. So, really, Kubernetes and storage, if you are a passionate power user who wants to know all those things, that's great, but really, we want you to be able to get a single click to deploy, manage, and move application and data from one place to another, on-prem to cloud, across data centers, or even back in time. And be able to do all of those things at the push of a button. So that's really what we do: we translate all of those lifecycle management and deployment workflows into all the plumbing that needs to be orchestrated and managed, so that whether you are doing this for Oracle, SAP, Hadoop, or TensorFlow, it all looks the same to you. - Yeah, we were talking about big data applications, and I was like sure, sure, you know, there's containers; then you start talking about Oracle and Tier 1 applications and I was like "oh!" You know what, those are coming, they're gonna be containerized, you're gonna have to be able to run them in your data center. You're gonna need absolutely rock-solid architecture, you're gonna need the kind of storage that does this and has policies that let you go in and describe, by application, what the storage policy needs to be for that volume and data, and have it be meaningful.
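Robin's actual bundle format is not shown in the interview, but a purely hypothetical sketch can illustrate what an application-level manifest captures that a per-container one cannot: roles, replica counts, placement constraints, and storage policy for the whole application as one unit. Every field name below is invented for illustration:

```yaml
# Hypothetical application-level manifest (illustrative only,
# not Robin's actual format). It describes the whole application,
# its roles, placement, and storage policy, rather than one container.
name: analytics-stack
roles:
  - name: namenode
    replicas: 2
    image: apache/hadoop:3
    storage:
      size: 50Gi
      snapshot_schedule: hourly   # app-consistent snapshots, policy-driven
  - name: datanode
    replicas: 6
    image: apache/hadoop:3
    placement:
      anti_affinity: rack         # keep replicas on separate racks
    storage:
      size: 500Gi
      media: ssd
lifecycle:
  clone: enabled                  # clone the app together with its data
  migrate: [on-prem, cloud]       # the whole bundle moves as one unit
```

The design point is that lifecycle operations (deploy, clone, snapshot, migrate) apply to this whole unit, which is what makes the "single click, move the application and its data" experience described above possible.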
I mean, I've often said, and have been writing a lot about lately, how IT folks need to graduate from managing and administering infrastructure to becoming policy experts and policy wonks, right? They've gotta climb the ladder a little bit and figure out how to write policy more than how to write Unix commands. - Exactly - I know there's a lot of cool stuff under the hood when we look at that: there's this idea of spindle performance, there's the idea of application snapshots, and, as you mentioned briefly, there's a lot of cool stuff on the networking side.
But, sort of more broadly, what do you see as the next kind of thing that enterprises are looking for, that you guys are bringing to market, that other people don't have? What are you sort of checking the box on? - So we talk to a lot of enterprise folks, particularly folks who are responsible for the data infrastructure. There is a huge wave, the DevOps wave, transforming the way software is built, shipped, and deployed in enterprises. But a lot of that still continues to be on the stateless side. If you talk to the folks who are managing these big, mission-critical enterprise database installations, they want to get that agility and those DevOps benefits, but they are still kind of stuck with bare-metal silos. In some cases they are making a transition to cloud and cloud-native services, but unless you are going to offload everything to a managed service environment, you are still stuck with these custom workflows, which lead to this sort of latency in deployment and management. So we talk to the folks who are chartered with modernizing their infrastructure and delivering that agility. There's a lot of pressure on them to bring that self-service experience that is happening in the rest of the organization under the CIO, and we say, okay, here is the solution that nobody else can deliver today, which will bring you that self-service, app-store-like experience even for these complex applications. Nobody does Oracle, or SAP, or Hadoop on containers or Kubernetes today. We'll make that possible for your deployment. - I mean, really, future-proofing this: if you're sitting in the CIO chair and somebody says, "Hey, you've gotta build a private cloud, or you've gotta come up with a journey to cloud, or something that even gets us to multi-cloud but doesn't lock us in," it sounds like you guys are aiming to be that next-generation platform.
- Exactly, so we are the software layer that abstracts away all of those things, so you don't need to change any tooling or workflows when you go from on-premise to cloud, or from cloud 1 to cloud 2. And we see that all the time: there are a lot of folks who want that self-service experience on prem today, but they know the cloud move is coming in a year or two. We also talk to people who have a conscious multi-cloud strategy, because they know that for some resources they are going to go to one cloud, but for machine learning they're going to go to another cloud. So that's one flavor. We're also seeing organizations which are committed to one vendor, and then there is an exec-level change, and someone says, "Well, you know, we are competing with these guys, not those, so now we'll move somewhere else." So for that reason, or for pricing reasons, people do not want lock-in. And so, when people say multi-cloud, we see less of, you know, "split my Hadoop cluster across two clouds." It is more about not wanting lock-in, wanting the flexibility so that you don't need to redesign your entire tooling when you go from one cloud to another. We make all of that possible. So you really, literally, can take a cluster and, in a single click, move it to a cloud, from on premise, or from cloud 1 to cloud 2. That's really the value we bring to the table. - Alright, so going back, with those manifest files you can sort of make the whole application migratable. We're kind of running out of time here; where can someone go to get a little bit more information? I'm sure your website, and what should they do, what should they be looking for, if they wanna really kick the tires here? - Yeah, so go to robinsystems.com. We've got a lot of information collected there; there's also a free trial button. Click on that and you can get your own sandbox environment.
And if you want to deploy at scale, just use the contact-us button, reach out, and one of our team members will respond and help you set up a more in-depth POC in that environment. - Well, thank you for being here today, Premal. - Thanks, Mike. - And thank you for watching. I know I'm gonna be digging more into how we do performance virtualization, scalability, capacity management, and all those other good things on these large, hundred-thousand-plus containerized environments, with all the production storage that goes with it. So stay tuned and we'll be back soon. Bye.