20250206 Container Driven Architecture

    6:04PM Mar 30, 2025

    Speakers:

    Rob Hirschfeld

    Claus Strommer

    Keywords:

    Container Driven Architecture

    virtualization platforms

    Kubernetes

    OpenShift

    Broadcom acquisition

    VMware

    container platforms

    abstraction

    delivery model

    GPU workloads

    hybrid containerization

    heterogeneous hardware

    power consumption

    ESG

    infrastructure operations

    Rob: Hello. I'm Rob Hirschfeld, CEO and co-founder of RackN and your host for the Cloud 2030 podcast. In this episode, we continue our dive into the changing architecture of IT infrastructure, looking at how containers and container platforms are changing the fundamental nature of what people want to buy, accelerated by Broadcom's acquisition of VMware making virtualization platforms much less attractive, and the shifting landscape here. This is work based on a presentation that I've been giving around the shift towards OpenShift Virtualization and Kubernetes in general, and some of the slides and graphics are from that. If you want prep, see that presentation; here we discuss the industry nuances that surround my assumptions in it. A really fascinating discussion that I know you will enjoy.

    This is the premise that we're working on, based on industry data and a whole bunch of pieces, and it's very related to the VMware retreat. But I think this is true even without Broadcom; it just wasn't happening quite as fast. And there's a subtlety here that I want to distinguish, because we think this is really important inside of RackN: there are two shifts going on here at different rates. The first shift is away from virtualization-first platforms. So this is saying, I don't want VMware, but it's also saying, I don't want to just go straight to another virtualization platform. What they're saying is: I don't think I need a virtualization platform as the core keystone of my IT designs. Whoops, I didn't mean to jump forward. What we see is an appetite for this, since they're already in a huge migration towards containers and container platforms, and those don't require virtualization. They're easier with virtualization, but they don't require it; they're infrastructure agnostic from that perspective. So the idea is, we're already shifting workloads dramatically into containers. What I see happening as a consequence of the Broadcom pieces, though, is that we're going to shift the virtualization workloads from VMware, or VMware alternatives, into Kubernetes virtualization much, much faster. Go ahead, Claus.
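
    To make the "Kubernetes virtualization" idea concrete: below is a minimal sketch of what moving a VM workload under Kubernetes looks like with KubeVirt, the project underneath OpenShift Virtualization. It assumes a cluster that already has KubeVirt installed and uses the kubernetes Python client; the image, names, and sizes are illustrative, not anything discussed here.

```python
# Hedged sketch: declare a VM as a Kubernetes object (KubeVirt CRD) so the
# same control plane schedules VMs and containers. Assumes KubeVirt is
# installed; the image and resource values below are purely illustrative.
from kubernetes import client, config

vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "legacy-app-vm"},
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "devices": {"disks": [{"name": "rootdisk",
                                           "disk": {"bus": "virtio"}}]},
                    "resources": {"requests": {"memory": "4Gi", "cpu": "2"}},
                },
                "volumes": [{
                    "name": "rootdisk",
                    # the VM's disk image itself ships as a container image
                    "containerDisk": {"image": "registry.example.com/legacy-app:disk-v1"},
                }],
            }
        },
    },
}

config.load_kube_config()
client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubevirt.io", version="v1",
    namespace="default", plural="virtualmachines", body=vm,
)
```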

    So you're saying, or maybe you're not saying it, but what I'm reading here between the lines, is that virtualization was never about virtualization itself. It was about abstraction. Yes. And containers provide sufficient abstraction that you can start running on bare metal.

    Yes. I mean, this goes back to our ISV conversations about how we're delivering software. I actually am not as worried about the abstraction; I'm worried about the delivery model. Twenty or thirty years ago, you would deliver things as tars and RPMs, and people would have to run the install themselves. We got to a point with virtualization where you could show up with VMs; that was your delivery model for software. An appliance, exactly. And we've moved to containers as the delivery model now. So it's not exactly that I don't need VMs anymore; we definitely need operating systems. But if I'm delivering everything in containers, then I'm not requiring VMs. And if I'm not requiring VMs, then I'm not requiring a virtualization platform anymore. I've eliminated that requirement from the workload. I'm trying to work top down more than bottom up for this. I think there are benefits both ways. But

    So I have a question. Yeah. You're talking about containers as VMs, and your slide says "ready for containerized delivery, accelerated by GPU-hungry workloads." But what I don't see here is hybrid containerization, because it's not all going to be accelerated by GPU. It could be CPU, it could be memory, it could be any number of things. It's going to be a mixed metaphor whichever way you look at it.

    So,

    Yeah, so I think what you're getting to is what I see as this story. I think that we're going to see, yes, everything packaged in containers,

    and each container can be different. I mean, you're moving towards agents, but each type of container could be different. Can you mix and match Kubernetes and Docker?

    I mean, the type of container would be relatively neutral from that perspective; it is pretty neutral. It's more a matter of, and this is the thing that I think is very interesting, you could actually mix the hardware underneath your Kubernetes infrastructure, which people are getting excited about. So it's like, oh, wait a second, I need high memory for this stuff, I can buy machines with high memory; I need GPUs; I need fast storage. And VM clusters today, from every vendor, are incredibly homogeneous, so you have to buy machines that match the highest capacity, and you have to over-provision, because they don't over-allocate RAM. So it's this sort of mess: there's a high overhead to using VMs, and there's a high capacity buffer requirement. You don't actually have to carry the same capacity buffer with containerized or Kubernetes workloads. Meaning, when we watch this with customers, they buy VM clusters of a specific size and shape over and over and over again, whether they need all the capacity on day one or not. They buy that capacity and then they ramp to use it, and they're never going to get to 100% utilization on those boxes, because it's just not a good model. This whole idea of virtual machines allowing you to over-provision gear and get more than 100% is silly. Nobody runs that.

    What do you see, or what do you infer, to be the driving force of the decision to do it that way? And what is the change, in either decision making or who makes the decision, that says: I'm now more willing to go to a situation in which I'm buying containers? This is more precision deployment. The difference between one-size, buy a bunch of the same thing and then utilize it as best I can, versus precision, almost, not quite custom, but more precision in the way it's delivered. Am I making sense?

    You are making sense, although the reason I'm chuckling is I don't think the decision is as elegant as you were describing. It would be wonderful if they were actually making decisions to target the needs more. The VMware decisions are made because it's a standardized architecture, and they're following a pattern. You can find ops people who know how to use VMware, and the OEMs will sell you a footprint. And it's not just VMware; it's the virtualization vendors with this virtualization-platform mentality. They know that there's a networking capacity, a storage capacity, and those are set in standardized form factors, and they were purchased in standardized form factors. We've had ten years of sort of locking in this architecture. I was doing VMware in the early days, before the SAN storage network architecture emerged, and there was a lot more variation in how VMs were set up. Over the last two decades, we've basically gotten to a point where this is what you buy for VMs.

    And my question to you is: what was the motivation on all sides to go that route? From the vendor side, I certainly understand it: I build less variability into what gets delivered. The question is, on the customer side, was it because the complexity led to too much time to implement correctly? Was it too difficult to operate and manage? Was it too difficult to find people who understood how to tune the more dynamic VM setups?

    In my experience in orgs, it's actually related to the way the companies are buying infrastructure, and the cloud made this even worse, by the way, because the development teams that consume infrastructure basically want: just give me a VM, give me an API, let me consume. They don't spend a lot of time worrying about how much they're consuming, and they don't spend a lot of time optimizing for their infrastructure. They're not engaged in doing that; they don't have the incentive. The whole point is, get that out of my way, I've got something somebody wants me to deliver. And so the teams that are doing the VM stuff, they get a budget, they know how much a VM costs, and they have a lot of incentive to optimize that mix, which they've done. One of the things that VMware did that's driving people nuts is they changed their licensing model to a minimum number of cores, and now people who had bought machines with low core counts to optimize their licenses are paying more. This is one of the Broadcom changes: you're paying more because the minimum number of cores is 16, and you have an older cluster, so they just renewed you at 16 cores, or 30 cores, or whatever, when you only had eight, and your price doubled even though you're running the exact same stuff on the same hardware. So there's a whole bunch of stuff where the VM team was very incentivized to do that work. In some cases their clusters are captive; there's a VDI cluster, so you buy the VDI app and this is what your cluster looks like. But I wish it was technical choices that went into this and how things were going.

    And I'm basically setting you up with those questions to say: they are not technical. These are not technology decisions. They tend to be skill based, budget based, financial on both sides. People were buying this block right here.

    And then one of the things on the bottom that's really interesting on this slide is there are things they didn't care about in the twenty-teens: they had a lot of DC power, a lot of DC space, and actually a fair bit of budget. And so for that buying decision, having this tightly integrated architecture was really attractive,
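
    A hedged sketch of the hardware-mixing point above: under a container platform, differently shaped machines can live in one cluster, and workloads are steered to the hardware that fits them with labels and selectors, instead of sizing a homogeneous VM cluster to the largest workload. It assumes the kubernetes Python client, nodes already labeled (the hw label here is made up), and the NVIDIA device plugin exposing nvidia.com/gpu; all names are illustrative.

```python
# Hedged sketch: one Kubernetes cluster, heterogeneous hardware underneath.
# Assumes nodes carry illustrative labels such as hw=gpu or hw=highmem and
# that the NVIDIA device plugin exposes nvidia.com/gpu; names are made up.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

gpu_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "training-job"},
    "spec": {
        "nodeSelector": {"hw": "gpu"},                 # land only on GPU boxes
        "containers": [{
            "name": "train",
            "image": "registry.example.com/train:latest",
            "resources": {"limits": {"nvidia.com/gpu": 1}},
        }],
    },
}

highmem_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "in-memory-cache"},
    "spec": {
        "nodeSelector": {"hw": "highmem"},             # land only on big-RAM boxes
        "containers": [{
            "name": "cache",
            "image": "registry.example.com/cache:latest",
            "resources": {"requests": {"memory": "96Gi"}},
        }],
    },
}

for pod in (gpu_pod, highmem_pod):
    core.create_namespaced_pod(namespace="default", body=pod)
```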

    Sure, but here's my question, on your previous slide, and on this slide, and the one before it even

    that's not this one. I think you're thinking this.

    No, actually, it was the other one with the chip.

    This one. Okay,

    This is more of the RackN shill slide, but

    That's fine. All of your slides are. And I want to make sure that it's not a fixed-in-stone direction. Okay. You talk about workloads and you talk about a transition to GPU workloads. Now, call this maybe my parochial view of the world, but there are a lot of other CPUs and specialized chips that are coming out. They're being designed, whether it's for AI or not for AI. And yet everything that I'm seeing on your slides is towards GPU. That's not going to be the case. You're going to have much more of a mixed bag, because A, availability; B, cost; C, architecture; D, footprint, and so on. And I don't see you accounting for any of that. I don't mean it as a poke, I mean it as a heads up: how are you going to deal with this?

    Yeah, I think the enterprise design questions are real, but the heterogeneity questions that you're describing are not as much of a strategic concern, from what I've seen from a customer perspective. This deck ends up being very focused on going from "I was buying virtualization clusters," where the things you're describing weren't a factor, towards "I need to replace my virtualization clusters." And part of the story is, if I can get away from the virtualization piece, then I can be much more flexible; it's in here somewhere, here it is. You are no longer constrained. And I think the opportunity here is for enterprises to be much more heterogeneous and bring in new types of architecture. That's the opportunity. You're right, I'm not calling it out specifically; the scaled-out piece is sort of a different thing. I'm trying to make a different point.

    No, no, I understand your point

    is valid, that heterogeneity should go up. But what it may also mean, Rob, is that from the tightly integrated and very homogeneous, you're going to have, to Joanne's point, much more of what I'll call precision. It's somewhere between completely homogeneous and bespoke. There are now classes, or vectors, that you can follow that say, to Joanne's point, yeah, there's scaled-out stuff for the AI, but there's a different kind of mix, with a different set of performance metrics that need to be delivered depending on what the hardware environment is. Is it lights-out? Is it edge? This is actually, you're getting to a point that I think is terrifying to the enterprise buyers. They are very used to a three-choice matrix, the three-cell matrix, not even. It's sort of like: do I need this footprint from one of these three vendors? It's a nine-cell grid, and that's all they've got. What you're describing, then, there is: who's making that decision? Is that a CFO, is that a CIO? And probably

    the CTO, it's,

    I would tell you, I think those decisions are being made by application teams and workload teams. And as for the CTO, they have very little desire to have a lot of heterogeneity in their environment. Yeah, okay,

    let me interrupt you there. I don't necessarily agree with that statement, because I'm hearing a lot more coming out about: well, talk to me about RISC-V, talk to me about Tensor, talk to me about X, Y and Z, and the creation of, call it, something between the CPU and the GPU that is as yet unnamed, that a lot of FPGA players are in the process of making. And that's where I go, okay, that's RISC-V, or that's Tensor. But that homogeneity is giving them the motivation to become purposeful. So the choice between the nine-cell matrix and bespoke, fully custom, is the trajectory of: oh, I can sub in some parts that are actually purposeful to what I need. And that is a direct-to-consumer play being embedded in the architectural premise for the future, so I can buy one or two pieces and I'm good to go.

    No, I love this idea. I think the question is who can deliver on that. It's the ops, right? The ops teams are already in this. This is part of what I'm trying to do with RackN: we're trying to talk to these ops teams. The problem is they don't have any control. So if the CTO says, hey, I want to put RISC-V in, one of the challenges is the CTO shows up and says, I'm running a RISC-V pilot, this looks great, I want to do it. They show up to the ops team and say, all right, I want to now buy a whole bunch of RISC-V servers and put them in our data centers. And the ops team is like, I don't know how to boot them, I don't know how to provision them, I don't know how to secure them, I don't know how to do all this stuff. So they're in a very serious bind; this is why the lack of reference architecture is a problem. I don't think that means it's only a skills gap. There's a skills issue, we'll call it a deficit, and there's also a tools deficit, because you're not going to be able to make the ops team capable of doing the job that the CTO and everybody else from above has mandated without the combination. And the question I guess I have is: who or what actually delivers on at least the tooling that allows that kind of access to this new container-based delivery mechanism and ongoing operational approach? Go ahead, Claus. Were you going

    to... Well, I also want to point out that when we're talking about homogeneity, there are two things we can talk about. One is platform homogeneity, say Intel versus RISC-V; then there's also vendor homogeneity. And again, someone coming from a VMware stack, where you would likely have been locked into Dell servers, and just specific lines of those, going to bare metal, going to containers, or at least stepping away from the VMware stack, gives you the flexibility to move sideways within the same platform. But yes, the side effect is also that it enables companies to experiment with other stacks as well, like RISC-V and so on. I don't know whether to consider it the root cause or the side effect, though; maybe a little bit of both.

    I think that it's a phased thing. I believe that when we start looking at an architecture more like this, then all of a sudden, once you've built that in, it becomes easier to bring alternate hardware into that mix. So it's sort of like: hey, I'm building this containerized workload, I have better access to the hardware, I can deal with heterogeneity, because the container platforms don't care to the same extent. Would you characterize this, from the point of view of ops and ongoing, well, both development and ops, as a new kind of abstraction? Not a different one, not a replacement for VMware, but they have to rethink what the abstraction is that they use, and what the abstraction hides from them or doesn't, or doesn't intend to hide. There's actually a note in here; I've been making this second point less. There are some network isolation issues, some limitations, both on containerized workloads and containerized VM workloads, that we're very focused on, like processors or GPUs or TPUs or specialized processors. One of the things that the ops teams don't have a good answer for at the moment, that VMs have done really well for them, is creating network isolation, so they have different issues. This is the thing about the schism with the ops teams: they have different issues than the workload teams right now. There's a lot of friction, because the ops teams are not very fast moving, they have a lot of issues they bring up, and the workload teams will just move to cloud, where they're going to get a yes on things. And by the way, the ops teams' inclination, especially when it comes to networks, is nail that sucker down and don't change a bit once you have something that at least arguably works. But I think there's a separate culture thing that we're all getting to, actually, which is the ops teams enjoying this: I've locked it in, I don't change it, I don't have to keep up with things. I think that generation is... that's not a sustainable model for ops teams anymore. A lot of teams don't realize that at the moment. But this idea that you can take your box and say, I'm not going to make changes to it, that's dead, or you just shouldn't run infrastructure. Where are you seeing it successfully get introduced, and by what means? Go ahead, Claus, and I'll give a customer example.
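
    On the network-isolation gap mentioned above: in a VM-first design, isolation usually came from per-tenant VLANs or virtual switches; the closest stock Kubernetes primitive is a NetworkPolicy, and it only has effect if the CNI plugin enforces it. A minimal default-deny sketch, with an illustrative namespace and labels that are not from the discussion:

```python
# Hedged sketch: default-deny ingress for a namespace, then allow traffic
# only from pods labeled app=frontend. Assumes a CNI that enforces
# NetworkPolicy (e.g. Calico or Cilium); namespace and labels are illustrative.
from kubernetes import client, config

config.load_kube_config()
net = client.NetworkingV1Api()

deny_all = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "default-deny-ingress"},
    "spec": {"podSelector": {}, "policyTypes": ["Ingress"]},
}

allow_frontend = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "allow-frontend-to-api"},
    "spec": {
        "podSelector": {"matchLabels": {"app": "api"}},
        "ingress": [{
            "from": [{"podSelector": {"matchLabels": {"app": "frontend"}}}],
        }],
    },
}

for policy in (deny_all, allow_frontend):
    net.create_namespaced_network_policy(namespace="workload-team-a", body=policy)
```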

    I would agree with Rob on it being dead, but not necessarily because the interest is in being flexible; more because static infrastructure has increasingly become a target for attacks. So it is no longer affordable to stay a static target. And as a result, the opportunity cost of switching to, let's say, a software-defined network, or something like that, is now lower again, not because SDNs are lower cost, but because the cost of your static infrastructure is rising.

    So you're talking about lower risk. Yes, thank you, I think that's a good point. This is, to me, one of the disruptions that we're seeing from cloud: if you're going to run infrastructure, you had better be operating in a much more nimble, dynamic, cloud approach. And nothing is static; even if you think you've got something stable, you don't have it, especially if you're working with any specialized gear or GPUs. Those things have higher refresh rates, higher update rates; they're more temperamental because you're running them in different ways, or you're running them at higher utilization rates. One of the things we saw, and I wish I could talk more publicly about the customer and the scale that they're doing this at, is one of our customers running a reverse auction, and they've got the reverse auction up to four vendors, and the discounts they're getting off of list price are unbelievable. For servers and infrastructure, because they can bid four vendors against each other, I'll say they're getting over 80% off of list in their infrastructure purchases. Jesus. Now, they're making big purchases, really big scale. But the industry loses quite a bit by thinking that they're going to pick one vendor, and these vendors are not that different, pick one vendor and then have to stay with that vendor and be static on it. Whether it's CEO, OEM, CPU, GPU, the lack of heterogeneity is really expensive for these companies, and it also has supply chain risks and innovation problems and all sorts of stuff. They're not doing it for this reason, but the nice thing about containerized workloads is they have more tolerance for variations in hardware, and therefore heterogeneity is a bigger option. Certainly hardware heterogeneity does. Yes, hardware heterogeneity, even though, in some ways, this is what's funny: the difference between a Dell and an HP and a Supermicro server is real, but still, yeah,

    But think about, sorry if I'm jumping ahead of you, Rich, but go back 20 years, okay, when I was at Celestica, which is an EMS provider, and you have massive quantities of shit going through production, on multiple lines for multiple vendors. What you watched was DIMMs, SIMMs, all of the little jelly-bean parts, because that's where your efficiencies of scale came in. And it's very easy: there used to be a publication, it's under a different name now, called iSuppli, and you could actually track the piece parts, the jelly-bean parts, through all the boards and all the major componentry in the hardware that people were buying. And whether it was contract manufacturers or OEMs, they buy such vast quantities that you really are paying quarters of pennies on the dollar. And the same thing, it went away for a while because of supply chain uncertainty, and now it's coming back again, because people thought things had stabilized after the pandemic, when in fact they have not. There's a certain AI company that we could talk about, but irrespective of that, we're going to get to exactly the same place very shortly. And now that we have RISC-V and the other products coming in, people are going to start going down that road, and you are going to see the heterogeneity. The question that I have is: is it going to stay tied to the price of the jelly-bean parts, or the bigger pieces, like the GPUs? Because that's where the higher cost comes into the overall make of the unit, whether it's a piece part or an actual server. Who's challenging Nvidia?

    AMD at the moment, but barely at the moment,

    But that's exactly the point. You have one dominant player, but you have TSMC, you have all of the other vendors starting to come up to that and trying to manufacture to lower the cost. So I see the heterogeneity argument landing in '26, and that's what you're going to see people buying. The CSPs are not going to go down that road; they have too much invested in the lock-ins to make those changes. But the enterprises that are repatriating off of cloud, off of the CSPs, those are the companies that are going to be very, very interested in this, because they're already planning for it. Yeah. So sorry for the rant.

    No, this is definitely on point. These are the Titanic forces, tectonic forces, maybe, that are shifting the industry here. And I guess with some of the cloud vendors, they're Titanic forces also. But I see this as... Broadcom, in some ways, was reacting to something I think was already going on, where they were trying to grab and retain margin on a shift that was sort of happening, and they're accelerating that shift in the background. But I think this was an inevitable transition in the market. They just lit a fire under people's need to accelerate it.

    I don't know if I can attribute that much foresight to Broadcom. To me, it feels more like they just metastasized, and they're sleeping in the bed that they made themselves.

    I agree

    with that. And if anything, they are pursuing a business model, a sales model, a pricing model, a packaging model that favors and lives on the back of homogeneity. They want you to basically buy stuff with even more administrative overhead cost, what they used to call shelfware, right. But I think at the end of the day they analyzed correctly that there was more opportunity to capture margin from VMware than VMware was achieving, in part because VMware's strategy was sort of classic Silicon Valley: we need to invent the next thing, which was all the Tanzu and all these efforts they had. And Broadcom's just like: no, you don't, we're gonna make a lot more money. Like when Nike, instead of making the shoes, starts having the fitness trackers, soft drinks, and, oh God, too many things on the menu. Quick, just a question, going back: given this, what does that say about the containerized workloads? What needs to be added to them? What are the factors that enable the move to this containerized workload model that we're talking about, maybe a year, two years out? Are there functional aspects? Are there particular architectural aspects? And Rich, I'm going to suggest we wrap that up as a Valentine's Day present for next week, because I think that's a really good topic. I added it to the list, and we can take that on, because I think that comes back to some of our Kubernetes questions. But I like this framing a lot better, because now we're talking about enterprise IT buyers and what they need, and we've finally decomposed the container movement. What does this next generation of ISV vendors need in order to make a realistic business of delivering using containerized

    workload, right? Well,

    here's the other question, and I didn't see it in the deck, and maybe it's there, so maybe you'll put it on the list: how much of this is driven by the need to reduce power consumption, an ESG-type method,

    decision factors? I can tell you, it's mentioned here, but it's not really as big a deal... actually, it's a pretty big deal. The customer that I talked about doing the reverse auction is doing a two-for-one rack consolidation. They're probably going to have more GPU, and probably more power per rack, but half the number of racks, so power matters, and they're maxed out right there. They can't expand in their US footprints, and I don't think that's limited to any one of our customers; the ability to expand in their footprints is pretty limited. So if you show up with a story that says I'm going to have less virtualization overhead and more flexible footprints, that's a big deal right now. Today they're buying machines that run a certain way, regardless of the workload. The power piece will come back into it. Now I'm answering stuff from next week, but I think the power piece does come back into it when they start saying: wait a second, do I need to run high-power-consumption workloads for these pieces? Can I be smarter about how I balance that out? Kubernetes has the capability to do that type of balancing; I'm not sure very many people do it,
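
    A back-of-the-envelope sketch of the two-for-one rack consolidation trade-off just described, with purely illustrative numbers (the customer's actual figures were not shared):

```python
# Hedged arithmetic: halve the rack count, accept higher power per rack,
# and compare total facility draw. All numbers are illustrative.
racks_before, kw_per_rack_before = 40, 12    # legacy virtualization footprint
racks_after, kw_per_rack_after = 20, 20      # denser, GPU-heavier replacement

before_kw = racks_before * kw_per_rack_before   # 480 kW
after_kw = racks_after * kw_per_rack_after      # 400 kW

print(f"before: {before_kw} kW across {racks_before} racks")
print(f"after:  {after_kw} kW across {racks_after} racks")
print(f"floor space freed: {racks_before - racks_after} rack positions")
print(f"power delta: {after_kw - before_kw:+} kW")
```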

    but,

    but you would think that would be one of the business drivers, the cost-containment side of the business driver, as well as the ESG side, to give them the purpose to do it. Yeah,

    I would think that, and I would love to see some data like this emerge, if we can show higher utilization rates, they're going to get more excited about utilization rates than power, which is very comparable. With VMs, because we partition and we don't oversubscribe RAM, the CPU utilization is constrained in these environments. With containers, it's easier to oversubscribe the RAM; you don't have the same constraint from that perspective. So it's much easier to ramp the CPU utilization higher and get better utilization of your investment. The value utility is higher in these container systems,
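
    A hedged illustration of that oversubscription point: Kubernetes schedules on requests but lets containers burst to limits, so the limits on a node can add up to more than the hardware, which is exactly what most VM clusters refuse to do with RAM. The workload, image, and values are illustrative.

```python
# Hedged sketch: requests are the floor the scheduler packs by; limits are
# the burst ceiling. Summed limits may exceed the node (overcommit).
# All names and numbers are illustrative.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "api"},
    "spec": {
        "replicas": 12,
        "selector": {"matchLabels": {"app": "api"}},
        "template": {
            "metadata": {"labels": {"app": "api"}},
            "spec": {
                "containers": [{
                    "name": "api",
                    "image": "registry.example.com/api:latest",
                    "resources": {
                        "requests": {"cpu": "500m", "memory": "1Gi"},  # scheduling floor
                        "limits":   {"cpu": "2",    "memory": "4Gi"},  # burst ceiling
                    },
                }],
            },
        },
    },
}

replicas = deployment["spec"]["replicas"]
# On a 16-core / 32 GiB node: 6 cores / 12 GiB reserved, burstable well past the box.
print(f"reserved:  {0.5 * replicas} cores / {1 * replicas} GiB")
print(f"burstable: {2 * replicas} cores / {4 * replicas} GiB")
```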

    Right. Let me think about it,

    because I'm not sure that I 100% agree with you, but we can table it till next week.

    I don't think they have the data yet, so I don't think they're making the decision. I do know, and this was fascinating, because I was listening to the Amazon Outposts conversation: Amazon, full stop, Amazon or AWS, does not oversubscribe RAM. So they have a lot of systems, or CPU... when you buy a VM from Amazon, it is not actually oversubscribed. It is a carve-out of a machine to a certain amount. So when you come back and say, wait a second, if I can get out of the VMs, then I can actually share the resources on the equipment I'm buying more effectively. But the infrastructure conversation we're having, that isn't in the analysis yet for anybody,

    because it should be,

    they're still thinking in terms of VMs and VM workloads, yeah, and they don't have the data to show that they could oversubscribe it.

    Well, you would think, if anybody would show it, it would be the cloud providers.

    It's risky, though,

    correct, true,

    Right. My issue is not as much on oversubscription, but on having incredibly fixed assignment buckets. When people are using VMs, and VMs in the cloud, which is how a lot of those, even the containerized services, are run everywhere, they are literally taking big boxes and carving them into little boxes. And that's the story. There's no dynamism in there. There's no dynamism. Yeah, I feel

    like they're offsetting the lack of oversubscription with spot instances. The spot instances allow them to make much more effective use of the carving, so reducing the carving overhead,

    I think of that as bin packing, not oversubscription,

    not necessarily new.

    It would also imply a new approach to the market for the availability of, basically, spot instances. It might end up starting to look more like the way power is bought, kind of at nodes, adjusting to the local demands,

    So spot instances, spot instances to fill the overhead, have an advantage over bin packing in that you reduce the risk of a node outage

    upsetting too many customers, and you're maintaining SLA, yeah,

    so with the spot instances, you can fill the remaining capacity, but you can still prioritize the primary VMs on that node,
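
    A hedged toy model of that spot-instance point: primary carve-outs keep priority on a host, and spot allocations fill whatever is left, getting evicted first when a new primary claim needs the room. The numbers and eviction policy are illustrative, not how any particular CSP implements it.

```python
# Hedged toy model: fill leftover host capacity with preemptible "spot"
# allocations and evict them when primary capacity is claimed. Illustrative.
HOST_VCPUS = 96

primary = [16, 16, 32]          # fixed on-demand carve-outs (vCPUs)
spot = []                       # preemptible fillers

def free():
    return HOST_VCPUS - sum(primary) - sum(spot)

def add_spot(vcpus):
    if vcpus <= free():
        spot.append(vcpus)

def add_primary(vcpus):
    # evict spot allocations until the primary claim fits
    while vcpus > free() and spot:
        spot.pop()              # simplest-possible eviction policy
    if vcpus <= free():
        primary.append(vcpus)

add_spot(8); add_spot(8); add_spot(8)
print("utilization with spot fill:", (sum(primary) + sum(spot)) / HOST_VCPUS)  # ~0.92
add_primary(24)                 # new on-demand claim evicts spot as needed
print("after primary claim:", primary, spot, "free:", free())
```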

    And I guess my point there is, for the CSP, that's going to imply a fairly radical departure in the way they price and the mechanism by which they price,

    Right, we're gonna pick this up. I'm out of time, I need to jump. Talk

    to you all next week.

    I have been amazed at the ongoing components that we keep bringing together about how we are building an ecosystem, an ISV ecosystem, around Kubernetes, but also the broader spectrum of what it takes to adopt these technologies and what forces shape how these technologies fit together and work. This has become really a multi-session series, as many of our discussion topics are. If you're interested in joining us, please do. We would love to hear your opinion, your experience, your questions about these topics. You can join us, as always, on Thursday mornings at the2030.cloud. I'll see you there. Thank you for listening to the Cloud 2030 podcast. It is sponsored by RackN, where we are really working to build a community of people who are using and thinking about infrastructure differently, because that's what RackN does: we write software that helps put operators back in control of distributed infrastructure, really thinking about how things should be run, and building software that makes that possible. If this is interesting to you, please try out the software. We would love to get your opinion and hear how you think this could transform infrastructure more broadly, or just keep enjoying the podcast and coming to the discussions and laying out your thoughts on how you see the future unfolding. It's all part of building a better infrastructure operations community. Thank you.