VergeIO: The Evolution of HCI to Ultra-Converged Infrastructure (UCI)

UCI offers a lot of flexibility and options when it comes to managing and scaling your IT infrastructure in ways that proprietary architectures like VMware can't. Check out these snippets from a recent visit with George Crump, VergeIO's CMO.
  • VergeIO: David Converges On Goliath
    00:30

    George Crump, CMO of Verge.IO, talks about the challenge of going head-to-head against VMware, not only for VergeIO but also for the clients making decisions about their investment in infrastructure, where they want to go, and the flexibility...

    Transcript
    This is not David going after Goliath.
    This is like whatever's below David, you know, the ant at David's feet, going after Goliath.
    And the reason we're doing it is, frankly, there's a need.
    You know, we're getting customers all the time asking us, is there an alternative to VMware? Can I do this? And we think, obviously, that there's a chance you can, and, you know, we'll walk through kind of what that looks like today.
  • The "Mainframing" of Open Systems Architecture
    00:36

    George Crump, CMO of VergeIO, discusses how VMware created a proprietary layer on top of open systems architecture, effectively turning it into a proprietary container.

    Transcript
    If you think about what VMware did at a very high level, what they did is they came in and said, okay, well, what if we just take that whole thing and wrap it into a single server?
    Right.
    And then these became, if you will, VMs on top of that.
    Right.
    And so that allowed us to, you know, start experiencing virtualization, at least at the open systems level.
    And I know my mainframe guys in the audience will say, I've been doing this for even longer.
    Sure.
    Um, but you know, that allowed us to start to benefit from this.
  • The Achilles Heel of Datacenter Architecture
    02:45

    Transcript
    We still had a fairly complex architecture, right? We had our separate SAN infrastructure that had to be purchased and maintained, and that, you know, really is still prevalent today.
    We had, of course, our compute infrastructure that had all the different servers and VMs and all that kind of stuff in there.
    So that was another area.
    And then of course, we still had our separate network, right? So you still had that classic three tier architecture.
    You know, we've talked about Hyperconvergence in the past.
    This was supposed to solve that.
    And the way I describe Hyperconvergence, just real quickly as a tangent, is: what did we really do with Hyperconvergence? You know, I'll just draw a three-node HCI solution here.
    Right.
    We basically took these things, right, and we essentially software-defined them.
    Right.
    So the storage, instead of the separate SAN, became software-defined storage.
    We took the network and, in theory, we software-defined that.
    Although I would make a case that most virtual infrastructures really don't do much in the way of software-defined networking unless you pay a lot extra.
    And then of course we had the hypervisor. But you still have three separate components in one box, right? And so the good news is it's kind of easier to buy.
    The bad news is now you have all this stuff in one box that's still separate, so you don't really gain as much.
    And so I think either way, whether it's down here with the hyperconverged infrastructure or up here with the classic three-tier, you end up with this complex infrastructure that has to be upgraded and managed separately.
    There's just a lot of different things going on there.
    And in both cases, you run into probably, I think, the biggest Achilles heel in the data center.
    And really, I think the reason people started to look at the public cloud is scale, right? It doesn't scale small very well.
    Right.
    And it certainly doesn't scale large.
    And if you look at people that are going to the cloud, they tend to be very, very small organizations or very, very big organizations.
    Right.
    And it's scale, right? It's very hard, I think in some cases, especially with HCI solutions, to start off with a two-node cluster.
    And as, you know, either of these environments gets into thousands of VMs and hundreds of nodes, it just becomes overly complex.
    And so that's really the other big issue: the issue of scale.
  • VMware Licensing Costs Are Growing
    00:43

    Transcript
    There's just the straight licensing cost, which is getting, you know, expensive.
    And what we're seeing, anyway, is that it's increasing.
    The second is how efficiently you use that hardware, right? If you look at the power of the CPUs that are available to us today, the IO capabilities of the media that's available to us today, and the capabilities of the network hardware, um, you know, we should be talking about hundreds of VMs per server, right? Most customers, for a lot of reasons, won't go anywhere close to that.
    And so you run into those sorts of issues, and then you run into, you know, just the straight licensing.
  • Maintaining The Hypervisor User Experience
    02:08

    There's some confusion about Microsoft's direction with Hyper-V, and there are also challenges associated with KVM and OpenStack. At Verge.io, they have developed their own approach using a combination of QEMU and KVM, creating a more streamlined hypervisor; an illustrative sketch of a KVM-accelerated QEMU launch appears after the clip list.

    Transcript
    There's a lot of options.
    You named a couple of them. Hyper-V?
    You know, I can't figure out what Microsoft is doing with Hyper-V.
    I don't know if Microsoft can.
    So.
    So that's a bit of a problem.
    KVM? Great.
    You know, good solid base.
    Lots of different variants of it.
    But, you know, people struggle with it.
    And then OpenStack, which was supposed to save us all, has got to be one of the hardest things on the entire planet to install.
    Right.
    And so I think that, you know, we see a lot of that.
    So of course at Verge.io, we have our own take on this.
    We use a combination of QEMU and KVM, but most of the work now has been, you know, bypassing a lot of that as we've matured.
    You know, for efficiency and things like that.
    And so we really almost have our own standalone hypervisor at this point.
    But that's it, right? And we'll get into kind of the differences here in a minute, but if you look at where we are with this, right, if we switch this from VMware to, you know, let's just say anything at this point, the user experience should stay the same, right? Because they're just interfacing with an app.
    It's a virtual machine.
    I think every hypervisor that I know of can run Windows.
    So these are Windows apps. It doesn't matter.
    Linux obviously is now very prevalent in many data centers, so that doesn't matter.
    And so the user experience doesn't change much at all.
    Now, there could be some performance changes and we'll talk about what those look like and how they work.
    But there shouldn't be any, you know, if you will, user experience changes now.
    And the other thing we've got to get into is: if the user experience isn't going to change, that sounds like all the heavy lifting is going to fall on the shoulders of IT.
    And so we've got to make sure we fix that.
    And that's that's really where the rubber meets the road, if you will.
  • OK To Mix: Intel, AMD, GPU Nodes
    01:37

    Verge.IO is designed to run on existing hardware, abstracting itself from specific chips or processors, thus also providing the flexibility to easily mix different types of nodes within the same environment.

    Transcript
    So the first thing is, you know, more on less hardware, right? Or more on the same hardware, depending on the situation.
    So, unlike many, many solutions in the market.
    Right.
    This is not a case of, I've got to also buy a whole brand new set of hardware.
    We literally will run on your existing hardware.
    The development team has done an unbelievably good job of abstracting itself from the hardware.
    Like, we specifically will write things in the code that, yes, could be available through a certain chip or processor, but we don't want to be locked into that chip or processor.
    So we actually go and do the work of writing the code.
    The second thing we do is we allow you to very, very easily mix different types of nodes within the same environment, right? And so, for example, these could all be Intel servers today.
    And then you could later put in a layer of, you know, I don't know, AMD servers if it made more sense for your environment or the cost was better or, you know, maybe you had an app that was charging by core.
    And I'm not a great CPU wizard, but from what I understand, AMD delivers more performance on fewer cores, so that could be a reason to do this, right? The other thing you could do is add a layer of, let's say you had an analytics package or something like that, and we support GPUs, so you could have some nodes that are GPU-based.
  • VergeIO For Data Center Disaster Recovery
    00:58

    One of Verge.IO's key features is its migration function, which allows for scheduled and repeatable migrations. Leveraging VMware's changed block tracking capabilities, Verge.IO only transfers the changed blocks within a virtual machine (VM) to its cluster; a simple sketch of that changed-block idea appears after the clip list.

    Transcript
    Probably our number one thing is this migration function that I talked about up here, right? You can actually, if you will, schedule it and keep doing it over and over again.
    And we tap into VMware's changed block tracking capabilities.
    So the only thing we're sending to the VergeIO cluster is, you know, the changed blocks within a VM, right? And so a great place to start is using us as a disaster recovery site.
    Let's face it, it's infrastructure nobody wants.
    Nobody's going to say, okay, yeah, let's reformat all my VMware boxes and throw in these VergeIO guys.
    I mean, I would love for people to do that, and it would work great, by the way, but nobody's going to do that in their right mind.
    Well, this is a great way to take sort of the next step after, you know, you do a proof of concept.
    You're going to make sure it works.
    But then what? Right.
    So this gives you that next step.
  • The Power of Verge.IO for Networking, Storage, and Hypervisor
    02:35

    In this clip we cover how Verge.IO offers enhanced capabilities at three levels: networking, storage, and the hypervisor. A brief illustrative sketch of the clone-by-reference idea appears after the clip list.

    Transcript
    At each of the three levels, storage, the hypervisor, and networking, you're going to pick up some capability.
    So let me start with networking first.
    Okay.
    Because I think a lot of people, you know, skip that because they've got a proprietary Cisco or whatever infrastructure.
    And again, if you look at the cost of NSX, that's probably why. We give you basically everything that's in NSX at no additional charge, right? So like I said, it's full layer two and three.
    You can use commodity switches.
    You can gradually switch over to those.
    So massive cost savings.
    So now let's talk about storage, which you and I love.
    Right? So at the heart of the system, the entire system is globally deduplicated inline, right? And that's built into the operating system.
    So it's not an add-on storage thing or anything like that.
    It's just core.
    It's very similar, if you will, to blockchain technology.
    I can't say that like, oh, it's blockchain technology because we actually did it before blockchain was a thing, but it's the same kind of concept.
    And so we know, if you will, the nodes of every block, right? And so we know how those things work.
    And what that gives us is unbelievably powerful cloning slash snapshot capability.
    So we call it IO Clone.
    And so you can copy an entire volume, even if that's two petabytes in size, in a millisecond, or milliseconds I should say, because it's the same when you make the initial copy.
    We just look at the nodes and go, Oh, those are all the same.
    We'll just reference them again.
    Right? And so this is a little bit better than snapshots, because we don't have this complex tree of, okay, is this changing? Is that changing? Do I have to do a copy-on-write? Those nodes, those blocks if you will, become independent at that point.
    So the value of that is you can take thousands of them, you can retain them indefinitely.
    Now, I said there was no capacity growth.
    If you retain them indefinitely, there will be capacity growth as you change things.
    You can, you know, restore them basically instantly.
    You can repurpose them, right? We have customers that will have, like, a MySQL environment and deploy the same thing over and over again, 100 times, for different customers and things like that.
    Um, and then the other big thing is IO performance; customers using it tend to see a significant improvement just in the raw IO capabilities of the product.
  • Take The Ultimate Extended Test Drive
    02:31

    In this clip, learn how easy it is to set up and test-drive the VergeIO application. You can be cloning virtual data centers in just a few mouse clicks.

    Transcript
    We can set up a test environment for you in a matter of minutes.
    We'll basically, if you will, eat our own dog food: we'll create a virtual data center for you.
    And, you know, ten minutes after you make the request, unless it's midnight, we'll have you up and running, and you'll be able to log in and start, you know, poking around with the software.
    And, you know, with the email there's like an online training guide that, if you read it, will take you through it, and you'll figure it out pretty quick.
    So I mean, within, let's just call it, an hour or two of making the request, you could have a complete virtual data center with VMs running and things like that.
    I think that's step one.
    Step two: at some point you're going to want to do a proof of concept.
    Um, you know, I think a lot of enterprise vendors shy away from that and try to get you to do try-and-buys and all these different things. Hey, we'll set you up with the software, give us some hardware.
    It might take one call to tech support for 15 minutes.
    Usually it's a networking thing and you're off to the races.
    And then like you said.
    Right.
    I think the logical next step once the POC ends is, hey, use this for DR.
    I like to think of DR as the ultimate extended test drive, right? You can run it for six months, a year, and from day one you're saving 50% off of your DR cost.
    My belief is you're probably in a better position from a disaster recovery standpoint.
    And then as time goes on, you, you know, have a new workload come around, maybe a storage refresh.
    We've got, uh, let's just say, a very, very large oil and gas company who's looking at us because of the storage costs.
    Right.
    And, you know, you say, okay, but there's sort of this "we replace VMware" thing.
    Well, they're looking at how much money they're going to save on storage.
    They're like, you know, maybe we could do both.
    So that's another area.
    And of course, certainly if your VMware licenses are getting ready to expire in the next six to 12 months, I would absolutely take a really hard look at this because, again, you're looking at a 50 to 80% reduction in cost just in licensing, plus better utilization of hardware, better utilization of storage assets, and, you know, a future of not having to buy proprietary network switches.
    The ROI on it just keeps adding up, you know, repeatedly.
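
The hypervisor clip above mentions that VergeIO builds on a combination of QEMU and KVM. As a rough illustration of what that combination means in practice, the Python sketch below launches a single KVM-accelerated guest with the stock qemu-system-x86_64 binary; the disk path, memory, and CPU values are placeholders, and this is not how VergeIO packages or invokes its own hypervisor.

    # Illustrative only: start a KVM-accelerated guest with stock QEMU.
    # The image path, memory size, and vCPU count are placeholder values.
    import subprocess

    def launch_guest(disk_image: str, memory_mb: int = 2048, vcpus: int = 2):
        """Start a QEMU process that uses the KVM kernel module for acceleration."""
        cmd = [
            "qemu-system-x86_64",
            "-enable-kvm",               # hardware-assisted virtualization via KVM
            "-m", str(memory_mb),        # guest RAM in MiB
            "-smp", str(vcpus),          # number of virtual CPUs
            "-drive", f"file={disk_image},format=qcow2,if=virtio",
            "-nic", "user,model=virtio-net-pci",
            "-nographic",                # serial console instead of a graphical window
        ]
        return subprocess.Popen(cmd)

    if __name__ == "__main__":
        vm = launch_guest("/var/lib/vms/demo.qcow2")  # hypothetical image path
        vm.wait()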
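
The disaster recovery clip describes replicating only the blocks that changed inside a VM, using VMware's changed block tracking to identify them. The sketch below shows that idea in generic form, assuming some CBT-style query has already produced a list of changed (offset, length) ranges; the function and callback names are hypothetical, not VergeIO's or VMware's APIs.

    # Illustrative sketch: copy only the ranges reported as changed since the
    # last sync. "changed_ranges" would come from a CBT-style query; "send_block"
    # is a hypothetical callback that ships one chunk to the DR cluster.
    from typing import Callable, Iterable, Tuple

    CHUNK = 1 << 20  # read in 1 MiB chunks, an arbitrary choice for the sketch

    def replicate_changes(source_disk: str,
                          changed_ranges: Iterable[Tuple[int, int]],
                          send_block: Callable[[int, bytes], None]) -> int:
        """Return the number of bytes actually transferred."""
        sent = 0
        with open(source_disk, "rb") as disk:
            for offset, length in changed_ranges:
                disk.seek(offset)
                remaining = length
                while remaining > 0:
                    data = disk.read(min(CHUNK, remaining))
                    if not data:
                        break  # range extends past end of disk image; stop early
                    send_block(offset + (length - remaining), data)
                    sent += len(data)
                    remaining -= len(data)
        return sent

Because unchanged blocks are never read or sent, repeat runs stay small after the first full copy, which is what makes the scheduled, repeatable migrations described in the clip practical.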
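
The storage clip describes global inline deduplication plus IO Clone copies that reference existing blocks instead of duplicating data. The toy Python sketch below shows that general concept with a content-addressed block store, where a clone is just a new list of references; it is a generic illustration, not VergeIO's data structures or on-disk format.

    # Toy content-addressed block store: identical blocks are stored once,
    # and a "clone" copies references rather than data.
    import hashlib

    class BlockStore:
        def __init__(self):
            self.blocks = {}  # hash -> block data, stored once globally

        def put(self, data: bytes) -> str:
            digest = hashlib.sha256(data).hexdigest()
            self.blocks.setdefault(digest, data)  # duplicate writes dedupe here
            return digest

    class Volume:
        def __init__(self, store: BlockStore):
            self.store = store
            self.refs = []  # ordered block hashes that make up the volume

        def write_block(self, data: bytes) -> None:
            self.refs.append(self.store.put(data))

        def clone(self) -> "Volume":
            """An 'instant' clone: copy the reference list, not the blocks."""
            copy = Volume(self.store)
            copy.refs = list(self.refs)
            return copy

    # Usage: two identical writes consume one stored block, and the clone
    # completes in time proportional to the reference list, not the data size.
    store = BlockStore()
    vol = Volume(store)
    vol.write_block(b"A" * 4096)
    vol.write_block(b"A" * 4096)
    snap = vol.clone()
    print(len(store.blocks), len(vol.refs), len(snap.refs))  # -> 1 2 2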

Full Video
