Hello, I'm Rob Hirschfeld, CEO and co-founder of RackN, and your host for the Cloud 2030 podcast. Today's episode was about HCI at the edge. We started with SUSE's Harvester, an HCI integration of Kubernetes and KubeVirt, with Longhorn as their storage system and some PXE-booting magic they threw in there. We talked about that a little, and how Kubernetes can fit, but we really morphed into edge operations and how hyperconverged infrastructure can or can't fit, and what works or what doesn't. That included Outposts, Amazon's edge offerings, and the cloud-to-edge migration from an application development perspective. So, a lot of fascinating topics throughout the conversation. You will learn a lot, and I know you will enjoy it.
Today's topic was inspired by Harvester, although I think we can expand this more generally, especially given the crowd. I can give a little bit of background on the Harvester pieces, and I think there's a generalized question: do we need HCI at the edge? Is it useful? Harvester is a project by the Rancher team at SUSE where they have combined their Kubernetes (I suspect this is the k3s stuff, but it might just be regular Kubernetes) and integrated Longhorn, which is their distributed storage system, I think based on Ceph under the covers, and KubeVirt, which creates the VMs. I haven't seen as much KubeVirt activity lately. But then it's a Kubernetes cluster all around. And the thing that makes HCI HCI is the hybridization of mixing your storage infrastructure and your virtualization infrastructure. The idea is that you've got a mix of servers with a general footprint, and you can use the storage on them to create a storage array and the compute on them to create a compute array, but it's basically one building block. From that perspective, has anybody actually played with the system and installed Harvester?
Not really, on my end. I can kind of see what they're going for; it feels a lot like they're trying to do an OpenStack equivalent on top of Kubernetes.
There's definitely an element of that, because there are OpenStack HCI pieces too.
And I think the main thing they add, or at least the appeal that I see from the documentation, is that they make it really easy to add more nodes. It's basically a PXE boot.
And I'm not seeing where the management server is. My experience with this stack is that there's a ton of work that goes into building the node, installing the Linux, putting yourself on the VLANs.
there's a link at the bottom. Read the docs. Thank you.
Rob, that was kind of my first question. I was actually thinking about this earlier, before the meetup, and that node piece is exactly it. As I talk to people, and as I find out where the challenge and the value of HCI actually are, people are not caring as much about how you deploy it versus, you know, day-two ops and how you get out of the business of IT infrastructure. That's kind of the appeal of HCI. So I'm not sure who the audience is for this approach; it just seems like more complexity.
So I suspect, considering this is coming out of Rancher, that the audience was a customer. And so this is a solution in search of a problem, other than the one customer they came up with the solution for.
I think there's a broader question here about HCI, the relevance of edge HCI, and this term and this approach exactly.
I get the feeling that the people who would be interested in this are those who want a cluster for VMs built from throwaway commodity nodes that they just add in numbers.
But, I mean, why not use Proxmox, or OpenStack, or VMware? There are so many options.
The difference, I think, is that with Proxmox or VMware or OpenStack, if your node goes away, there is some work you need to do to redistribute the workload and remove that node from the cluster. In this case, at least based on what I'm seeing, the nodes themselves are treated as ephemeral.
I mean, even with Kubernetes, if you lose a node, you've got to pull it out of the system. The Kubernetes scheduler, with kubelet, may reschedule that VM on a different system; I just haven't seen it. I think you'd get that as resilience. But that's not necessarily a win, depending on how that system comes back online, and what the state was.
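The failure handling being described, where a node disappears and its VMs get placed elsewhere, can be sketched roughly. This is an illustrative simulation, not Harvester or Kubernetes code; the function names and the least-loaded placement policy are my assumptions:

```python
# Hypothetical sketch: when a node is marked failed, drain its workloads
# onto the least-loaded surviving node. This is roughly what a scheduler
# does once kubelet heartbeats stop and the node is evicted.

def reschedule(placement, failed_node):
    """Return the {node: [workloads]} map with the failed node drained."""
    displaced = placement.pop(failed_node, [])
    for workload in displaced:
        # pick the surviving node currently running the fewest workloads
        target = min(placement, key=lambda n: len(placement[n]))
        placement[target].append(workload)
    return placement

cluster = {"node-a": ["vm-1", "vm-2"], "node-b": ["vm-3"], "node-c": []}
cluster = reschedule(cluster, "node-a")
# node-a is gone; vm-1 and vm-2 are spread over the survivors
```

The open question the speaker raises, what state those VMs come back with, is exactly what this placement-only view ignores: without shared storage, the rescheduled VM starts empty.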
I guess that's another question for me. I always thought KubeVirt was a stopgap versus something I'd use if I want to deploy VMs net new. The intent isn't to use KubeVirt to deploy net-new VM environments, but to help me limp along until I can containerize all my apps or choke out my VM workload. So I'm a little bit confused about the audience. Again, I think I agree with Ronnie: there was probably a customer that needed this, and we ended up with a product; maybe we can sell it to some other folks and get support revenue from them as well.
I would say that's not the only use case. If you need process isolation, you need a VM anyway, and being able to manage that from the Kubernetes control plane has its appeals. I don't know if I would do it, but I can see that it has appeal, especially when you're not in the cloud and you don't have the proper APIs to launch virtual machines independently. There are also use cases where workloads just are not Linux at all, so you can't run them in a container.
There's another component to this: if you are all-in on Kubernetes, which I'm assuming a user of this solution would be, you can kubectl your VMs with KubeVirt. If you're looking for a mixed environment where you're doing containers and VMs, this actually gives you some of that. But if the only thing you're doing is VMs, and you're using this as your VM scheduler, well, VM scheduling is actually pretty hard.
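For a sense of what "kubectl your VMs" means in practice: KubeVirt exposes VMs as a Kubernetes custom resource that you apply like any other object. Here is a minimal sketch of such an object, built as a plain Python dict (you would normally write it as YAML and `kubectl apply` it); the field layout follows the `kubevirt.io/v1` API as I understand it, so treat the details as illustrative rather than authoritative:

```python
# Illustrative sketch of a KubeVirt VirtualMachine object as a plain dict.
# Field names follow the kubevirt.io/v1 API as commonly documented; the
# exact spec layout here is an assumption, not a reference.

def make_vm(name, memory="1Gi", image="quay.io/containerdisks/fedora:latest"):
    return {
        "apiVersion": "kubevirt.io/v1",
        "kind": "VirtualMachine",
        "metadata": {"name": name},
        "spec": {
            "running": True,  # start the VM as soon as it is created
            "template": {
                "spec": {
                    "domain": {
                        "resources": {"requests": {"memory": memory}},
                        "devices": {
                            "disks": [{"name": "root",
                                       "disk": {"bus": "virtio"}}]
                        },
                    },
                    # boot disk shipped as a container image
                    "volumes": [{"name": "root",
                                 "containerDisk": {"image": image}}],
                }
            },
        },
    }

vm = make_vm("demo-vm")
```

The point of the transcript's argument is that this object goes through the same control plane, RBAC, and tooling as any Deployment, which is the appeal for a containers-plus-VMs shop.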
The question becomes the difference in audience: who's interfacing with Kubernetes versus with a virtual machine? The traditional thought would be that the developers do the Kubernetes interfacing, whereas a traditional sysadmin does the VM interfacing. And is Kubernetes, or kubectl, really what I want to interface with as a VM admin?
I mean, going from VM admin to Kubernetes, I can see the hesitancy. But going from Kubernetes to VM management, there's definitely the appeal of taking advantage of the features. To draw an analogy: I've never used KubeVirt, but I was supporting both Kubernetes and standalone Docker and Docker Compose installations, and the pain point with Docker, for me, going backwards, was always losing the Kubernetes probes: the liveness, readiness, and startup probes. I mean, in Docker you can configure health checks, but the runtime doesn't do anything with them unless you run Swarm; otherwise it will ignore the health check. So looking at this, going from Kubernetes to the VM side, there is an appeal in saying: I want to be able to configure my control plane to restart the VM if it's not responsive. If you have the Kubernetes control plane there already, you get those probes for free. I can see the appeal of trying to do that for VMs. I'm not saying it's the right way of doing it, because, as the saying goes, once you have a hammer, everything starts to look like a nail. So I don't know if this is the right approach for VMs, but I can see how someone might have thought it was an interesting approach, right?
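The "probes for free" point can be made concrete. Below is a hand-rolled sketch of the liveness-probe loop that Kubernetes runs for you and that plain Docker (without Swarm) does not act on: probe the workload, and after a threshold of consecutive failures, trigger a restart. All names here (`supervise`, `failure_threshold`) are hypothetical, not a real API:

```python
# Sketch of liveness-probe behavior: after `failure_threshold` consecutive
# probe failures, call restart(). This is the supervision loop you get for
# free from a Kubernetes control plane and have to build yourself otherwise.

def supervise(probe, restart, checks, failure_threshold=3):
    """Run `checks` probes; restart after threshold consecutive failures."""
    failures = 0
    restarts = 0
    for _ in range(checks):
        if probe():
            failures = 0          # any success resets the counter
        else:
            failures += 1
            if failures >= failure_threshold:
                restart()
                restarts += 1
                failures = 0
        # a real loop would sleep periodSeconds between probes
    return restarts

# Simulated workload: healthy twice, then unresponsive for the rest.
responses = iter([True, True] + [False] * 7)
count = supervise(lambda: next(responses), lambda: None, checks=9)
# two restarts: triggered on the 3rd and 6th consecutive failures
```

Kubernetes' actual probes add knobs like `periodSeconds` and `initialDelaySeconds`, but the shape is the same, which is why losing them when moving back to a bare runtime feels like going backwards.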
Actually, go ahead
No, I was actually going to go kind of the same way. As he was talking, I was thinking about circa 2009, 2010, roughly, the first time I ever saw an EC2 instance. And I said to myself, this is really stupid. I know VMs, and I didn't understand it; I didn't know why you would do something this way. And, you know, a little over a decade later, look at it: if you were born in EC2 and you came and looked at the way VMware, Red Hat, or KVM, anyone, did VMs in the enterprise, you'd say this is really dumb; it seems like too heavy a way to do something that's simple if you were born in the cloud. So if you're coming at it from the perspective of someone who's always worked with containers, and you have this requirement that you need kernel isolation, you need a VM, it doesn't make any sense to change your operating model and use something other than Kubernetes as the control plane. So I can buy that argument: if you're born in Kubernetes, then VMware makes no sense to you. Even OpenStack, to some extent, makes no sense to you. You just need this additional requirement of being isolated at the kernel.
In a sense, this goes back to the previous discussions we had about complexity. I mean, yes, Kubernetes adds more technical complexity, but it reduces the process complexity, or the maintenance complexity.
But it does things for you, right? That makes sense to me. And the more I look at this, the more interesting it becomes, because the other thing about this is that it includes a storage back end. So when you create a VM, you're automatically putting it into a storage-backed system. You can do migration; you can persist it even if the machine gets shut down. That is the whole appeal of HCI: you're getting the storage backing for the VMs as part of the environment, which increases the system complexity but reduces the VM-use complexity.
I mean, that's important. There are also the downsides: historically, HCI has had some very notable bumps in performance scaling when it comes to storage. So it's not a general workload solution.

Yeah. We've been doing some research on this topic specifically, and we're walking away thinking the appeal of HCI is that it simplifies the consumption, and to some extent the underlay design. But tying compute to storage isn't what people want. What people want is the improved workflow of provisioning and management of the system. Whether that comes with converged storage and compute is irrelevant, as long as I have the experience I want to have. So we get into this purist argument: when we look at the dHCI solutions versus the HCI solutions, the HCI folks argue that dHCI isn't HCI. I don't think people really care as much as the HCI folks make them out to care.
I agree with that. Actually, in some ways, if you are administering a system, it's nice if storage just builds itself and you're never worrying about it. But, you know, having a separate storage array that's just a storage array, that stuff is easy.
It's very much easy. You know, we built our lab based on vSAN, and we don't regret doing that, because we wanted to mimic the decisions of five, six years ago, and this is what someone five or six years ago would have done. But we definitely feel the pain, operationally, of saying, okay, we need to apply firmware to the system. Managing the fact that compute and storage are married together is a pain in the butt without some type of overarching system to handle it for me. The appeal of the dHCI and HCI systems proper is that I can feed one a firmware update and it orchestrates everything that needs to happen for that firmware to be applied. When I have to replace that with people and tooling to do it, it is a huge burden, and it kind of eliminates all the value, all the cost savings and value of converging storage and compute.
So, Keith, another thing: storage and compute, I mean, we've been down this road before, right? The hyperconverged infrastructures did that. And it seems to me it's also a skill-set thing, which is that most people who are storage-savvy are not necessarily compute-savvy, and vice versa.
That's very much true, and you're echoing what we're finding. In general, the story I'm trying to tell is that I need to move away from this: unless I have the scale where any of this matters, I shouldn't be mucking around at these lower levels at all, because my attention needs to go elsewhere. Like this edge problem we're talking about in general: I don't want to be in the minutiae of solving compute-level hardware design unless I actually have to. Today I do have to be in that minutiae, but the goal is to get out of it and rely on folks like Rob to solve that lower-level problem for me, so that I can focus my resources on the higher-abstraction problems.
Yeah, I agree. I mean, that's the whole thing of disaggregating hardware and software, disaggregation in general, right? You want to make the hardware stuff go away so you don't have to deal with it.
Yeah, but the reality is that it's not happening. That's where, you know, three years ago, if you'd asked me what we should be doing, I'd have said focus 100% on the abstraction and let the hardware providers figure out the hardware. It simply hasn't happened.
No, it hasn't. I agree.
No, they don't have a lot of incentive. And treating it like a separate layer, from our perspective, if you treat it like an isolated layer, it actually becomes a little bit more problematic; you're making the system more complex by shunting it aside. And when I look at their bare metal pieces, it's pretty minimal and...
very hard wired.
Maybe. That doesn't surprise me, because...
It's hardware engineers designing this stuff.
Oh, not just that it's hardware engineers designing this stuff; it's hardware engineers who have never done operations.
Oh, that's very true.
Yeah, that becomes the gap that I see as I sit, in my business, between the marketing and product teams in most cases. There's such a huge disconnect between how they view the value of these things and how customers use them. Which is, you know, the story of every product in history.
Yeah, yeah. Yeah.
I mean, I do see the appeal. You know, to me, edge implies it's not a data center environment: you don't have administrators on site; they're probably available only by remote. So there is definitely the attraction of a drop-in, self-contained environment. That, to me, is a consistent thread for an edge.
Yeah, I can't agree more with that. If you think about a retail location, you know, the simplest form factor: I can send three Intel NUCs with three connections, and I have a cluster. If one of those things fails, I have the management system to identify which one, and I can just send another one, or have a hot spare on site, and there's no need to send a person. That is Nirvana. The problem is, it gets way more complex than that.
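A quick note on why three boxes is the sweet spot in that retail scenario: a majority-quorum cluster of n nodes keeps working as long as a majority is up, so it tolerates floor((n - 1) / 2) failures. The arithmetic is trivial, but it is the reason two NUCs buy you nothing and three buy you one safe failure:

```python
# Majority-quorum failure tolerance: a cluster of n nodes survives the
# loss of up to floor((n - 1) / 2) nodes while keeping a voting majority.

def tolerated_failures(n):
    return (n - 1) // 2

assert tolerated_failures(3) == 1   # one NUC can die; ship a replacement
assert tolerated_failures(2) == 0   # two nodes: no safe majority after a loss
assert tolerated_failures(5) == 2
```

This is why the "send another one or keep a hot spare" model works at three nodes but not two, and why the complexity the speakers mention starts when you want to tolerate more than one concurrent failure.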
Well, this has been a head-scratcher for me from that perspective, because the NUCs have very minimal management capabilities. They're not really hardened or protected, and they don't even really have multiple NICs; I think they do have two NICs, but still. There was Scale Computing, I think, which is a Nutanix competitor; they do hyperconverged infrastructure, store-scale virtualization platforms, and they were doing it with NUCs, which they would daisy-chain together. It struck me that at some point, given the complexity of that, you should just put a better box in place. I guess there's redundancy in NUCs, but you're going to have to build a whole bunch of stuff, like a distributed storage plane, to guarantee that it works.
And I think that's what people run into from a practical perspective. On paper, they'll see something like this and think, oh, I can do this with NUCs. And then they run into all the management problems that we solved in the data center two decades ago, with hardware providers putting in IPMI and all this other value-add that drives up the cost of the hardware, and we see why the hardware costs so much money.
Yeah. Huh, this is exactly the gap between the dream, the vision, and what's practical from a drop-in perspective.
What's a good IT footprint for an edge? Would we say there's a minimum IT footprint for an edge that we could think about, as a minimum, for this question?
Rob, I can speak to that; I have that experience, and that's what the VNS product is all about. I mean, we have a minimum footprint: it's an x86 box. I think the smallest one we sell has got a handful of cores; we used to have an even smaller one, but that was kind of useless. And it's a single box. It's standard; it's Dell or Lanner or Advantech, you know, white-labeled off-the-shelf boxes. I forget how much memory; I think 16 gigs is the minimum, and I can look it up, but it's a pretty small footprint. And we find typically our customers buy two of them. I mean, the off-the-shelf price of this thing runs about 600 bucks, so they're not super pricey. So we put two of them in for failover purposes. Keith, that relates to your earlier comments about maintenance at the edge: the idea is, if one fails, we ship another one out, or we send a tech, and it's not a senior-level tech who goes out and swaps the box. And that works for retail, for gas stations, for quickie markets, for that kind of stuff.
In that case, are they running VMs, running their own workloads, on those units?
We do have that. Those boxes are a little larger because, remember, we're running our network stuff on them as well. So for the ones with the network plus the workload, I think the minimum box is a 24-core box, so it's definitely bigger. But again, I think the retail price of the bottom-end Application Edge box is probably about 2,500.
And that's actually not bad, you know. People are finding the cost isn't in the hardware anymore; the real cost is in this layer that this is hoping to either solve or add to. You know, Rob, we've had the conversation in the past: is Kubernetes the right platform for the edge? And it's kind of like, well, that's the platform we have, so that's what I'm getting. And the real-world competition, if I were advising a customer on an edge solution today, would either be something like this, because that's what we have, or looking toward Amazon with the one-rack Outposts stuff.
That one-rack stuff is not cheap.
No, it's not. I looked at one of my Outposts...

Outposts, but even... yeah, Outposts is very pricey, and Azure has a version too, the Azure edge.
Yeah, but they run like 750k to start.
Well, no, that's the full rack. You can get a 1U or a 2U. Yeah, like, we have just one running on one server now, based on...
based on Dell, but
Yeah, definitely. Like, 26: that's only 26 grand over three years, and I'm getting the AWS control plane out to the edge, and the value of that control plane is super valuable. But, you know, again, an unknown operating model. I don't know: if I'm a retail shop and I need 700 of those, or 1,000 of those, I don't know if AWS can support that scale logistically. And it's funny that I'm asking the logistics question about Amazon. But I don't know if they can support me in the way that I need.
Well, it's interesting that you say that, because Verizon can, because we do that all the time, right? We have customers where we deploy 6,000 widgets out to customer sites, you know, regularly. 10,000, we don't care.
And that takes, and Lumen is kind of throwing AWS's numbers back in their face here, it takes 10 years to get 10 years of that experience.
We've been doing it for, what, 40 years.
I've made the offhand comment that I can see Amazon, AWS, buying someone who does this, because there's no shortcut to getting that experience. Scale doesn't magically get you there: they can sell 1,000 of these in a month, but that doesn't magically get you 10, 20, 30 years of experience supporting this model. And that's where I question AWS's ability to execute.
I question their commitment to it. I had this happen a few years ago: I had a very weird conversation with their network team, because, you know, obviously they buy network services from Verizon, among others. So I was having a conversation with them, and they basically said, oh, we'd like to become a peer, you know, a telco peer, like Verizon and AT&T and Lumen are peers, right? Which means that network traffic goes over each other's networks; it's kind of a shared model. That model has been around for years, and most people don't talk about it; I don't think the telcos really talk about it outside of the industry. So Amazon was coming to us and saying, oh hi, we'd like to be a peer with you, and their entire motivation is so that we give them their network services for free. And I explained to them, I said, okay, you want to become a regulated telco? Fine, have at it. But I don't think...
That's what people forget about this, yeah: everything from getting the pieces and parts in. And it's regulated because it has to be able...
...to scale like that. Exactly right. Yeah.
It would be interesting to have Amazon as a peer, with the same liability as us-east-1. That would be interesting.
It would be interesting, but the answer is they wouldn't do it, because they don't want to be a telco, you know.
Well, they don't want to be a telco. And it's funny, the sense that I get, kind of going back to this edge, and Kubernetes at the edge, and HCI on Kubernetes at the edge: it goes back to the practical conversation customers are having, and the challenges they're having. They're having these data-center-like challenges out at the edge, but without the resources to architect, design, manage, and deploy these solutions. There's a lot of opportunity for the Verizons of the world; there's a lot of opportunity for the MSPs of the world. But going back to this conversation about this HCI solution: I don't know if the software expertise is there, tied to the operational expertise, to put HCI, specifically HCI, at the edge, where one site might have three boxes and another site might have 15 boxes. That's, you know, tough operationally.
Yeah, no, it is not easy, because as far as I can tell, the telcos are probably the only ones that really have that expertise. I might argue some of the big outsourcing outfits probably have at least some of it, because they manage the far-flung empires of their enterprise customers. But...
That's about it. But even the telcos aren't reaching into the interior, right? This is a platform operations challenge in the middle of it.
Right. Yeah, we don't generally get into the customer's LAN. You're right.
I mean, there's an operational concern here about building and maintaining the cluster: setting up the VLANs, the networking, building the site up.
I think, yeah, some of the motivation is also just distributing content, particularly data-heavy content. I mean, we have things like the Bandwidth Alliance, where Cloudflare and the cloud providers set up peering to access the data faster; they essentially do the caching for the cloud providers. But I can see that the cloud providers themselves would likely be motivated to try to address this at the network level themselves, because it brings them control over it.
Yeah, I've always struggled with the model, even in a data center. I've always struggled with the Outposts model in the data center, because of that LAN part of it, that part of the stack at the AWS demarc, where AWS ends and the customer takes over. And I think onesie-twosie it's fine, and for the largest customers it's probably not that big of a deal. But once you start looking at the 1U and 2U stack options as you go down-market, that stuff just isn't nearly as rock solid. When I'm selling a $750,000 Outpost, that's going to be a different engagement than a $26,000 Outpost.
Oh, my goodness, I hadn't even thought about that. Yeah.
that engagement is just ridiculous. Yeah.
And also the cost. The challenge, and this is an interesting component of what they're showing with Harvester, which is all open source, with the implied do-it-yourself and minimal licensing pieces, is that, you know, you start replicating something across 100 sites, and it's going to take operational effort. There's a lot of enterprise work to do in building that type of system, like we've talked about before. And, I'm not sure if you're aware, people start to get cheap on doing that work.
So I guess,
This is Simon, sorry. It might be that Amazon is trying to get third-party vendors to implement solutions for customers. They're trying to get customers familiar with AWS services, to rely on the fact that Amazon will be there. They're trying to reach through the carriers to the end customer, and some of the carriers will develop and resell those services too. So ultimately, I agree with the concern. But just in the same way the Amazon Prime truck that pulls up to your house isn't actually owned by Amazon, probably, or driven by them, but by a small third-party contractor, they're trying to get the last mile of these things implemented by other people.
Yeah, well, they have to; their costing model doesn't work with that unless they outsource that piece of it. I don't have to tell you that the last mile is always the problem.
Yes. And the carriers have the relationships with the customer, and they're strongly incentivized to implement cool services. So...
Right, which is why we brought out Application Edge. That's exactly what it was addressing. It came from, like, a million customers asking us: hey, you have this box out on our sites; can we put some of our applications on it?
Yeah, so it's funny: 15, 20 years ago, I'm like, these boxes are appearing in my data center, in my edge; why am I deploying VMware on top of that?
Right. And the carriers have huge advantages: they run at scale, they're used to dealing with stuff that isn't allowed to fail, and they're very good at serving business customers. So they are the ideal folks to increasingly take over running your stuff at the edge.
And the only problem is, we know nothing about applications. Yes. Many conversations...
I guess the point about all of this is that if you find somebody with deep enough pockets who can do the design and rollout on top of something like this, it becomes really compelling. And I think there are customers who will pay, you know, higher, Amazon-type money for a solution, because a solution is actually a solution to the problem, versus customers needing to hire the expertise internally and kind of figure it out themselves. That's true. But it takes a lot of money; I don't have to tell anybody on this call that it takes a lot of money to build something like that. A lot of money, and a lot of talent that, frankly, is kind of hard to find.
Yeah, I can tell you about the lot of money, because we built the network piece of it. And it was, you know, it was a hard birth. Let's put it that way.
Yeah, once you're there, it's fine. It's just, you know, getting there that's the task.
Well, the other thing is, we were pioneers in this, and, you know, Verizon is obviously not a software company, particularly. So we were working with partners, and we had to explain to the partners just what we were doing; it was like a totally new concept for them. One company, actually, the CTO later told me: we wrote up a document of the specs we were looking for, and we shared it with a number of companies when we were in the process of choosing which partners to work with. And the CTO of one company that we didn't happen to choose admitted, a couple of years later, that he took the spec I had written and literally turned it into his next product.
Yeah. From my customer days, this is a problem I absolutely wanted my telco to solve for me, but the maturity wasn't there yet. So I'm glad to see it. Every hardware vendor I talk to gives exactly this same use case for 5G infrastructure, and when you talk to VMware, they're very open that there seems to be a revenue-sharing opportunity between telcos and VMware, with a design not too different from what we're looking at now with this Harvester: you take vSphere and insert some IPUs or SmartNICs. It's a very interesting scenario. But again, the services and applications on top are the killer part; that is what's hard.
Right. Well, we ran into this with Application Edge: when we originally launched it, it was only available for containerized applications, so based on Kubernetes. And you know what? Most of our enterprise customers don't have containerized applications yet.
So, a really interesting mismatch. From a technical perspective you think, okay, that's the perfect platform; it makes perfect sense; that's what everyone is talking about. But Verizon, and telco providers in general, aren't typically talking to the folks building those applications, which is the problem all enterprise IT folks have: they have really interesting solutions, but they're not talking to the folks building these applications, mainly because those folks typically can't afford it.
That's an interesting comment. Well, mostly, I mean, obviously we're selling this to the enterprise. When we talk to the SIs, the SIs are actually more interested, because many of them are building new applications that are containerized. Now, to be honest, we've added the capability of supporting VMs as well, so that kind of skirts around the issue. But obviously VMs are heavier, so you need a bigger box to support them.
Yeah, my experience in the enterprise has been that if someone's building something on containers, it's not that they can't afford it. As Rob has talked about, it's that they haven't budgeted for operations, because they don't understand operations. So
that was the point I was gonna make
is once you get to the part where you say, oh, I need these pieces: there's no money. You didn't ask for money for this.
Well, that's because the developers don't know diddly squat about
this. This is the actual appeal of Outposts or the Azure edge solutions. One of the reasons why I think there's a premium for them is that the developers aren't developing for edge. They're developing for cloud, regardless. And that's been one of my edge points for a long time. From that perspective, whatever developers are used to using is going to need to be the operational footprint at the edge. That means Kubernetes, it's going to mean containers, and VMs for things that predate that. And it's going to have to have some type of cloud-like operational pattern. Yeah, I
agree
That's right. Now Amazon is relying on you also wanting to go higher up the stack and use familiar things from AWS. Right? So in Outposts, the whole idea is: oh, it's just Amazon.
Yeah, well, it's interesting you say that, because Outposts is not the first time a company has packaged up a rack of stuff. EMC has done it several times. I know Dell did it at least at some point. HP? To the marketing and product teams, it sounds like a great idea.
You're right. I think you're absolutely right.
The reality is that none of these products has done well in the market. None of them lived. So
except for HCI, surprisingly. For whatever reason, HCI has actually done well. Dell VxRack and VxRail are super popular. So, you know, there's a story in there somewhere.
I think part of that is sales pressure from the Dell side. A couple of years back, at a previous position I was in at the time, we were looking at expanding our rack infrastructure by a couple more storage arrays, and Dell was pushing very, very hard toward HCI instead.
Yeah, it's pretty funny. I'm sure I'm not disclosing any NDA information, because I heard this via a non-NDA channel, but circa that time they were incentivizing the sales force to sell VxRail heavily. And I think that was mainly to compete against Nutanix. And then they flipped it, because the one thing that's more profitable than selling commodity servers as enterprise storage is actually selling enterprise storage.
Yeah, yeah.
It's more profitable to sell enterprise storage than it is to sell HCI. That switch got flipped at some point in the past year or two.
I mean, the storage story, I think, has been the same, but the storage APIs changed a lot. And speaking of APIs, I totally agree, but I think the real purpose of these Outposts is that it's not bringing the edge closer to the cloud, it's bringing the cloud closer to the edge. It's about bringing the cloud APIs close to the edge. There was a recent article arguing that Kubernetes isn't about containers, it's about the API, and that hits close to what I was arguing for already: it's an opinionated way of doing things. And so that's what the cloud providers are bringing to the edge: a consistent API for creating a VM, for creating storage buckets. It has quite an appeal.
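To make the "it's about the API" point concrete, here is a minimal Python sketch of the declarative pattern those cloud APIs share: you submit desired state, and a reconcile step computes the difference from observed state and applies it the same way at a cloud region or an edge site. The function and resource names here are illustrative, not a real Kubernetes or AWS client.

```python
# Hypothetical sketch of declarative reconciliation: the caller states
# desired state, and the platform computes what must change. This is the
# pattern behind Kubernetes-style APIs, simplified to plain dicts.

def reconcile(desired: dict, observed: dict) -> dict:
    """Return the actions needed to move observed state to desired state."""
    actions = {"create": [], "delete": [], "update": []}
    for name, spec in desired.items():
        if name not in observed:
            actions["create"].append(name)      # resource missing entirely
        elif observed[name] != spec:
            actions["update"].append(name)      # resource drifted from spec
    for name in observed:
        if name not in desired:
            actions["delete"].append(name)      # resource no longer declared
    return actions

# The same declaration yields the same plan wherever it is applied,
# which is the consistency argument for bringing cloud APIs to the edge.
desired = {"web": {"replicas": 3}, "db": {"replicas": 1}}
observed = {"web": {"replicas": 2}, "cache": {"replicas": 1}}
print(reconcile(desired, observed))
# {'create': ['db'], 'delete': ['cache'], 'update': ['web']}
```

Because the call is driven entirely by declared state, applying it twice is a no-op the second time, which is what makes the model repeatable rather than snowflake-prone.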
That, in some ways, is the topic I have outlined for two weeks from now, which is Infrastructure as Code collaboration. It's what you're getting at: how do I define my operations in a way that is repeatable, so I'm not dealing with these snowflakes? We are over time, by the way. So
Yeah, Rob, super appreciate
the conversation. This was fascinating and went in exciting and interesting directions that I didn't anticipate. So I appreciate everybody's voice and input.
What a great conversation. We really covered the operational challenges for edge in a very direct way, and it's one I know we are going to keep coming back to, because solving DevOps at the edge is a core part of our mission. So please join us at the2030.cloud, be part of the conversation; we want to hear your voice here too. Thanks. Thank you for listening to the Cloud 2030 podcast. It is sponsored by RackN, where we are really working to build a community of people who are using and thinking about infrastructure differently, because that's what RackN does: we write software that helps put operators back in control of distributed infrastructure, really thinking about how things should be run, and building software that makes that possible. If this is interesting to you, please try out the software. We would love to get your opinion and hear how you think this could transform infrastructure more broadly. Or just keep enjoying the podcast, coming to the discussions, and laying out your thoughts on how you see the future unfolding. It's all part of building a better infrastructure operations community.