20240507 silos vs systems

    3:46PM Jun 15, 2024

    Speakers:

    Rob Hirschfeld

    Keywords:

    kubernetes, organization, infrastructure, challenge, build, work, people, tools, open source projects, run, kubernetes cluster, curated, standpoint, create, talking, vendor, install, ansible, open, containers

    Hello. My name is Rob Hirschfeld, CEO and co-founder of RackN and your host for the Cloud 2030 podcast. In this episode, Martez Reed and I have an in-depth conversation about the challenges of propagating technology inside of enterprises: the core challenge of selling silos and individual technologies, what Martez describes as beneficial tool sprawl, versus building up systems and integrating things into end-to-end technology, what I've been calling infrastructure pipelining. We break down what's going on in the street related to open source technology, Kubernetes, and other aspects of what's happening, and how things fit together in a really interesting and dynamic way. I know you will enjoy the conversation.

    We covered the HashiCorp stuff last week, I think, pretty well, and I haven't seen any better analysis than ours. To tell the truth, Martez, I think your commentary was spot on, but still

    very much early days, and we'll see how it all shakes out. But I think it's honestly part of a broader industry-wide conversation that the industry is going to have to reckon with: what has IaC become? I've seen a lot of other tools, things like Winglang, or some of the others, where there's a desire to push it, depending on how you view it, more toward true code, so to speak, to be able to utilize Golang and Python and TypeScript, similar to Pulumi. But a lot of people are touting the idea of tightly coupling the app to the infrastructure-as-code capability. And the thing I often wonder is: okay, you create this capability where your app and your infrastructure definition are very tightly coupled. What about all the use cases in which the organization doesn't build the app? That's always the thing I see when people start talking about DevOps and all of the cloud native architectures and things like that. It always seems to have a very developer slant to me, in that I think people assume the majority of apps are developed in house, forgetting the millions and millions of businesses that run apps developed by a third party, for which I still need to have infrastructure and still need to be able to successfully deploy the application.

    I agree with you. We sort of get very tied up in the developers, developers, developers, and a lot of people's time is actually spent maintaining or using something that other people have written. Do you think that changes the conversation from an IaC perspective?

    I think, to some degree, it opens the aperture and sort of course-corrects it, in that there's an actual conversation about the right tool for the right job. And so that gets you into the conversation people have about tool sprawl. At its core, is that necessarily a bad thing? The tool that might work for you and the use case you have within your organization may not work for me and the use case I have within mine. So is it inherently bad that we use different tools, because those are the tools that make sense for the job we're trying to do, or the outcome we're looking to deliver? Or is there really a need to say, you know what, Rob, you and Martez need to figure out how to use the exact same tool to get this job done? Because, A, we're not paying for two tools. Or, B, we don't want to have to document and have our teams scattered and using different tools; we want the synergy of the two teams, or the organization, utilizing a single tool to build out our capability as an organization. I think that's the reckoning organizations are going to have to figure out, particularly when you start talking about the broader landscape, where things are only getting more and more complex. And honestly, from a personal technologist standpoint, it's becoming daunting trying to keep up with the next thing that's coming out, the latest thing, the next best way things should be done,

    especially when they require you to throw out the old architecture. Yeah, that's true. So, Isaac was showing me a slide the other day from one of my decks and asking me to explain it. It says the same thing you're saying: there are places where I think sprawl is normal, where somebody might have a team preference, or a preference for a way to do things. And I think there are places where consolidation actually is a benefit. The line for me is infrastructure. When our customers deal with infrastructure, sprawl is not that useful, because it means a lot of people doing things in different ways. They're doing the same infrastructure slightly differently, and then it's very hard to get compliance and audit and governance and reuse out of that. Granted, that's where we play as a vendor, so we have reasons to think this should be more consolidated. But it doesn't make as much sense that, say, CI/CD pipelines for different dev teams, or an orchestration system, have to be universal. Matter of fact, most of the orchestration systems I see, and I'd be interested in your take from a Morpheus perspective, because it's really hard to design an orchestration system, or orchestrations, that are universal for people, right? They end up being sort of departmental.

    Yeah, and that's the primary challenge that I've seen. Even going back to my time at Puppet, the challenge was, you would have these Forge modules that were essentially the pre-curated, existing pieces of collateral people would have access to, to quickly get going. The challenge ended up being, usually after a couple of months, they were like, yeah, it doesn't quite meet exactly how I do things in my organization. And so that's always the challenge. Another example I often look at is federal spaces, with needing to comply with DISA STIGs. Somebody has already hardened an Ubuntu box before. Why does every organization need to harden that box on their own, whether they harden it with an Ansible playbook, or they've written a script to harden the OS, or however they've done it? And that speaks to, in many ways, people not wanting to take what's already been done, to a degree. And obviously you've got, as it starts to get to lower

    levels, what you're hitting. I was just doing a presentation about this: it's so hard to get people to reuse work, in some cases, because they're like, well, I know how to do this well enough, I'm just going to do it however I think I should do it. I don't want to take the time to learn how somebody else did it, even though that would disperse the knowledge within their organization.
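The STIG-style hardening Martez describes is typically a playbook of small, idempotent tasks. A minimal sketch of the idea (the single control shown is illustrative, not an actual STIG baseline, and the host group is a placeholder):

```yaml
# Illustrative hardening playbook in the style discussed above.
# The one control here (disabling root SSH login) stands in for the
# hundreds of checks a real DISA STIG baseline would cover.
- name: Harden an Ubuntu host (illustrative subset)
  hosts: all
  become: true
  tasks:
    - name: Disallow root login over SSH
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: 'PermitRootLogin no'
      notify: Restart ssh

  handlers:
    - name: Restart ssh
      ansible.builtin.service:
        name: ssh
        state: restarted
```

The point in the conversation is that some version of this gets rewritten inside nearly every organization rather than reused from a shared, curated source.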

    So that's where it gets tricky. As an example, I've been trying to find a new hypervisor for my home lab, okay? And, of course, being in the space, I started going down the various different routes: I'm gonna try Proxmox, I'm gonna try OpenStack, I'm gonna try this, I'm gonna try that. And the one thing that made me start to think in a different fashion was the same problem you see with Kubernetes: solutions that are built for scale oftentimes aren't easy or convenient to scale down. As an example, if I wanted to stand up a single-host OpenStack instance, I've got some hardware in my environment and I want OpenStack on it, the install and deployment isn't that easy. Yeah, OpenStack scales really well to thousands and thousands of hosts, but it's not built for my use case of only ever running it on a single host; it has to be built in a way that it's going to be scalable to meet those other use cases. And so that's often, I think, what we run into: the use case challenge. Similarly to what you mentioned, somebody's already done that work, already built it out. But the thing I don't want to have to deal with is, let's say, as an example with automation, I don't want to have to read the code and find 17,000 if-else statements, because your Ansible playbook is trying to accommodate 50,000 use cases. I have a very specific use case that I'm looking to tackle, and it's probably going to be easier for me to just write a couple of lines of Ansible to address my specific use case, or the specific way that I install MySQL or Postgres. And so, unfortunately, what happens is, I had that one use case. Now the one use case turns into three use cases, and three use cases turns into five use cases,

    right? This is where we are with all of this. I took the quick Ansible route instead, right? Because the alternative would have been to say, all right, I'm going to take this community version. And, you know, let's not even go community; let's keep it inside our corporate walls. If I'm a company and I have an Ansible playbook that installs a SQL database, and you go, hey, this is great, I'll take it, and it doesn't do quite what you want, you clone it. And then it's very difficult to go back to the author and say, hey, I want to inject a variable in here to make these changes so that I can do my thing. Because that person, and this is where I think infrastructure as code is misdefined, that person would look at your patch to their Ansible playbook and be terrified. They're going to be like, I don't really have a way to test or validate whether you are giving me a good patch. So I might take in your change so that you can use my playbook, but now I don't know if that playbook works for me anymore, and I don't want to take the time to test it and do all the other work I need to do. And so we end up with this case where the cost of collaborating is high, and it's often borne by the people who are sourcing the automation for everybody else.
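The "inject a variable" change Rob describes would look something like this in Ansible: exposing a default that a downstream consumer can override, instead of cloning the playbook. All names and the version value here are hypothetical:

```yaml
# Hypothetical shared playbook that installs a SQL database.
# Exposing sql_version with a default lets a consumer override it
# (e.g. ansible-playbook site.yml -e sql_version=16) without
# patching the playbook itself.
- name: Install PostgreSQL
  hosts: db_servers
  become: true
  vars:
    sql_version: "15"
  tasks:
    - name: Install the requested PostgreSQL package
      ansible.builtin.package:
        name: "postgresql-{{ sql_version }}"
        state: present
```

The catch, as the conversation notes, is that every variable added this way is a new use case the original author now has to validate.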

    The question becomes, is the value there? If I'm the original creator of the content, or the playbook, whatever it might be, and you provide a patch, and I never had that use case, I took on additional work that doesn't, in many ways, directly benefit me, right?

    Yeah, no, that's hard. Let me just ask a question: do you see a way out of that trap?

    I do and I don't. I think the opportunity is to continue to redefine the constructs and the objects in various ecosystems. If you continue down the Ansible example, obviously you've got the built-in capabilities in Ansible to do things like manage a service or manage a file or manage a package, and those carry into various other configuration management tooling. The challenge becomes, when I have to start creating additional abstractions, I need to take those underlying components and create something useful. Typically, I tie it to something very broad. As an example, it's usually MySQL; I tie it to something at that high a level. The challenge with that becomes: usually it's really not MySQL. It's usually MySQL configured in a very specific way for Red Hat Enterprise Linux. That's usually what it is; it's not MySQL in the broad sense. And so what ends up happening is, do we go down the route of saying, you know what, instead of saying it's MySQL, it's production database X for app five? That's what it literally is; that's what it's built for. And if you need to build the dev database for app seven, you clone that and build your own specific flavor. Is there really the reusability in some of those components that we think there is, particularly when we start talking about things that are infrastructure related, versus code? When you start talking about programming languages, usually I'm going to have a library I can leverage that oftentimes has a specific purpose. I think we think about playbooks or modules in a slightly different way, less as a library that's going to be used as a component. Obviously, there's the concept of enhancements, like roles, that can be utilized, but I think those are

    advanced, and a lot of people don't understand them, which is part of the challenge, right? Yeah. And

    so I think, contextually, that's the challenge: there's not a good building block abstraction that's well defined in many of these things. You have the same problem with Terraform. Yes, you've got Terraform modules, but those once again start to blur the lines. My use case of how I define and build out VPCs is different than the way you might do it. So, as an example, can I go to the Terraform registry and say, you know what, I want the module for AWS VPCs, and is that something I can actually use in my organization, based upon the way we define things and how we design our infrastructure? So there's the technical challenge, and I think there's also the organizational challenge of, do I want to shoehorn the way I design and build things into the constructs that somebody else has decided?
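For contrast, the built-in primitives mentioned earlier (manage a package, a file, a service) are the level where the abstraction does stay reusable. A generic Ansible sketch, with the package name, file path, and content all placeholders:

```yaml
# The low-level building blocks most configuration management tools
# share. Everything concrete here (nginx, the conf path, the config
# content) is a placeholder for illustration.
- name: Baseline web server setup
  hosts: web
  become: true
  tasks:
    - name: Manage a package
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Manage a file
      ansible.builtin.copy:
        dest: /etc/nginx/conf.d/app.conf
        content: "server { listen 8080; }\n"

    - name: Manage a service
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

The reuse problem discussed above starts one layer up, when these primitives get composed into an opinionated "MySQL for RHEL" style abstraction.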

    And I mean, here, to me, we always have taken the opposite side, right? The question, in some ways, is not, is that the right thing to do, but how do we make it so that the tilt goes towards, I do want to participate and reuse? I actually think that, in some ways, because it's hard to do that, the tools don't encourage it. They don't really create a lot of incentive for that reuse. Either it's all done for you, and there's a ton of out-of-the-box stuff, and then you're back to very simple drag and drop, which, to me, ends up running out of gas pretty fast, because somebody else behind the scenes is doing all the work and your options are limited; or it's hard, and they just punt on it, and they're like, here's a reference copy, go build whatever custom thing you need on the other side. That was part of what Isaac was talking about with Backstage, and how Spotify built it: there's a design that assumes there's a development team running around building those connectors for you.

    Yeah, I've tinkered some with Backstage, and it's definitely not for the faint of heart, which I think is some of why Spotify had the recent announcement they had. I didn't dig too deep into it, but I think it was this week or last week, and I think it was along the lines of, they recognized Backstage is hard to get going with and hard to manage, so they were trying to create some things that made it a little bit easier from a boilerplate standpoint. And similarly, with Backstage, it's very opinionated. One of the things that I didn't necessarily care for, at least out of the box, from an initial glance standpoint, was the idea that I had to tie things to a Git repository

    to create new projects. Yeah,

    I mean, I get why they went that route. It's just not necessarily something I would suggest for every organization as something they could actually pull off themselves at any semblance of scale. So you quickly get to situations like that, which of course I run into every day at Morpheus: the build versus buy conversation, which is really the crux of it. In essence, it boils down to, do I do something that's very unique to me, or do I take advantage of something that somebody else has already built and already curated?

    And that's the challenge that I've seen all over the industry: how do you end up convincing somebody to take something that's curated?

    Well, I think the big part becomes realizing, particularly from a product standpoint, how best you're going to tie into existing environments, understanding that there's brownfield: existing, messy environments with a lot of different things. So I think you have to go in eyes wide open, understanding that I'm going to provide a platform that has extensibility for you to tweak and customize in various places. But on the other hand, there does need to be what tends to be termed solutions, for those that are willing to accept pre-curated items and see value there. I can, in theory, take advantage of the extensibility that I've already built into the platform, utilize those things, and build out what would be a little bit more than reference examples: things that you could take off the shelf and say, you know, I'm going to use this. The problem becomes, how do you bridge the gap between somebody who says, yes, I want to use your pre-curated content and collateral, versus, I'm just going to take your platform and build whatever crazy thing I want on top of it? And I think that's the world we're in with Kubernetes, and insert your platform of choice: the opinionated versus non-opinionated situation. I think there's going to need to be some course correction from a consumer standpoint, in that you're going to have to start accepting, as an organization, a little bit more of an opinionated approach. You probably can't keep saying, this is the way my organization does it, and we have to do it that way. And so the thing I often tell people is, there have got to be some concessions. If you can go down the route of building a solution, or whatever tool you're building, on your own, and maintaining it for the life of its existence, knock yourself out. There are organizations that do that all the time and see value from it, because it's specifically built and tailored to their use cases, which works well for them. But most organizations can't do that. They don't have the money; they don't have the manpower. The

    challenge is most organizations can't, but teams still keep choosing to do it. I agree with you; this is one of the big dilemmas. The benefit of buying a solution is that hopefully, and this is where you have to make some choices, the company providing the solution is actually helping do that curation of these pieces. I mean, this is what we try to do, but it's incredibly hard to explain. We hit exactly the same dilemma you're talking about here: we're curating this so that we do it one way, so we make it work across our customer base. And there's a benefit to the fact that we're curating this, making it consistent, so that you don't have to own it, ultimately. But it can be incredibly hard for somebody who's like, well, we've always done this work, we want to keep doing that work, to give up the ownership, or share the ownership. And then they have to learn, and this is not free either, they have to learn the system on top of that, so they can figure out where they can inject the special spice that they do have to inject. I don't know. I watch a lot of companies who are very short term: I managed to get this job done faster this one time because I didn't have any learning curve. But then they're constantly fighting that choice, making the choice over and over again

    well, and I think part of that, as an organization, is coming to a realization that, you know what, I probably need to buy something and try to start molding the way we do things to best fit that product or solution. Usually there's a need to have more data about how you actually operate as an organization. As an example, the one thing that keeps coming to mind for me, for most organizations, particularly those looking to go to Kubernetes, is: what's the total cost of ownership and the investment that you're making in Kubernetes? I doubt that most organizations could quantify that. Most will say, you know what, we need to go to Kubernetes to get more development speed and agility, and insert your productivity buzzword of choice, right? The thing I often wonder is, well, you've made a massive investment in not only skilling up people to know Kubernetes; you're adopting a brand new ecosystem of tools and capabilities within your organization. You're moving to now needing to support both VMs and containers, usually. And now you're also looking to move up the stack, from a cultural and organizational shift standpoint, to have your development teams become familiar with developing applications, typically in a microservices fashion, because many people believe that Kubernetes equals microservices. And so there's this massive investment into Kubernetes, and I often wonder, are organizations actually seeing the ROI? Yes, in theory, I can do my development much faster. You probably could have done that with VMs if you had committed the same amount of energy and effort you did to standing up, maintaining, and managing Kubernetes. And so it's things like that. I don't know that we're ever going to see a massive shift until the industry gets better at actually identifying how they're operating, because then they'll be able to see, hey, you know what? We're spending X number of hours every month or every year building, maintaining, and curating our own content and collateral, versus, we could take something that XYZ vendor has created, maybe tweak it a little bit, and spend half the time, a third of the time. Because right now, I think it's very difficult for most organizations to actually realize the value difference between grabbing something that's curated versus spending the time to create something brand new. Which is the same dilemma people run into all the time with Terraform, as an example. And I tell people, as an engineer, at this point in time, I don't enjoy fighting with Terraform day in and day out. It's a massive time suck that ideally we could fix in a better fashion. But since organizations don't have a great way to track that, it essentially falls to the bottom line of, hey, I pay Martez X dollars an hour, whatever he uses his time for. I mean, he's doing things that are productive, but I don't know how much time he spends writing Terraform. He's

    very busy. It's hard to measure whether his time could be used in other ways. Yes, that's true.

    So that's where you quickly get to: does it make more sense for Martez to spend five hours a week helping to curate existing content and collateral, or is it fine if he spends 35 hours every week writing his own Ansible playbooks, writing his own Terraform modules, all those things?

    No, I think that's interesting: it's these ratios that make a difference. It's funny, because we compete in places where you walk into an org and people are defending their jobs. And anybody can make something not work; that's sort of the ground theory that I operate on. So if you're showing somebody how you have a better widget, and they like the widgets they've got, or, to your point, they're spending 35 hours a week turning the wheel on whatever it is they're doing, but they're comfortable doing that, they can certainly show how your ability to save them time isn't going to work, if they want to. And that's a shame, because it could be that they'd be spending 10 hours a week turning the wheel instead of 35, and then be able to do a lot better stuff, because most ops jobs are not just 35 hours of effort a week. They have a lot more that they could be doing.

    That becomes my question, and I think this has become industry-speak: instead of toiling away on these menial tasks, what are the higher-value things that you could be doing? I honestly don't know that most organizations have a list of higher-value things that Martez could be doing with the 35 hours a week that he spends doing Terraform. I think we just say that. That's

    the part. No, you're right. This is the challenge, and this, to me, is one of the big things: we have a lot of tools that people are used to using and that have added productivity, but they've reached a ceiling, and we're so used to filling our time with them that we're not stepping back and saying, wait a second, is this the right strategy? And Kubernetes could be coming up to that ceiling, where some of the things that it accomplished are really helpful, but the complexity, you know, there's probably a degree of

    diminishing returns on reuse. Well, but

    it's not just diminishing returns. I think what you were highlighting, and what we're talking about even with the Ansible and Terraform pieces, is that there's an architectural limit from a reusability and abstraction perspective, yeah, that's inherent in the systems, right? Ansible is very difficult with this, and anything that makes it more modular, more reusable, is actually harder; it takes a lot more work to maintain. And the challenge with Kubernetes, and this goes back to the early days of Kubernetes, which is 10 years now: I kept watching and waiting to see if Kubernetes would recreate the opportunity for software developers to build Kubernetes-packaged products, like Helm charts. Here is my Helm chart for my product; install my product, and now you don't have to recreate that. You can buy a product based on a Helm chart. But it hasn't happened. And maybe you're seeing it; I haven't. I don't see companies that are like, oh, we're just Kubernetes. I see projects, and actually some of the DevOps stuff is like that. You can fill in the blank on the names, but one of these GitOps things, you install it, and you hook it up to all your Git stuff, and now you've got a GitOps thing. But for the most part, I haven't seen vendors selling Kubernetes-enabled products.

    Oh, maybe. You've got a couple of pieces; obviously, given that SaaS has been a thing for a long time, vendors are often running that under the covers of a publicly hosted solution. The other part is, when you start talking about installable on-prem software, there's usually a tipping point, from an industry standpoint, where it makes sense to make that move. The thing I often hearken back to, that a lot of people either weren't around for or don't remember, and it makes me feel old, is the early days of virtualization. I'm sure you remember, Rob. Microsoft said, no, don't run Exchange virtualized; that was the thing for the longest time. Don't run your databases on virtualized servers. Like, oh

    my god, we would fight that so much,

    Very real. Those were meant to be run on bare metal, physical hardware. And so we're running through the same sort of transition in the industry: does it make sense? The difference I would point out, and this comes up when I talk to people who say all the workloads have just moved to Kubernetes, is that a lot of people feel like it's the same as moving from bare metal to virtualized servers. There's another gap moving from virtualized servers to containers or Kubernetes, in that it's typically going to be a slightly different architecture. It's a different support model. It's a different way in which you operate as a vendor, to be able to support a Kubernetes version of your app versus a VM version of your app. The other part becomes understanding your target demographic from a buyer standpoint. The challenge becomes, is your buyer ready for Kubernetes? While there's a lot of talk about Kubernetes, there's a ton of organizations that are nowhere near being ready to adopt, manage, and care and feed for Kubernetes. And so do I throw my customer into the deep end and say, you know what, here's a Kubernetes install, have fun standing up a Kubernetes cluster and running my application in your Kubernetes

    cluster? They'd have no idea. Go run it in the cloud, but the cloud vendors have made it easy for the developers to make that decision for a company, right? You can go and get somebody to stand it up for you; that's what they want, right? The industry hasn't really had a lot of incentive, the leaders in the Kubernetes space haven't had a lot of incentive, to make Kubernetes really an on-prem property. Our team was at Red Hat Summit listening to the OpenShift pitch, and as far as I can tell, doing it yourself is still hard, even if you're using an OpenShift or an out-of-the-box distro. Yeah, that's a problem

    that goes to, is that pain that you want your customer to have to endure? Which quickly goes to perception. When you're selling services or software or whatever you're selling, if what I now associate with your software is pain and difficulty that's really tied to Kubernetes, I'm going to correlate the two and say, you know what, this software sucks. I couldn't install it. It was difficult to configure the networking. I didn't understand the storage. They're saying PVCs, and I thought they were talking plumbing. All those things are quickly going to be associated with the software you're selling, or being sold.

    But at the same time, I haven't seen alternatives emerge, right? Especially with Nomad, which was the other competitor; I think its future is in jeopardy. The concerns that you're raising haven't slowed Kubernetes down.

    So I think it's a lot of cultural things. I think, unfortunately, the toothpaste is out of the tube because of Kubernetes. And what I mean by that is, the power dynamic in many organizations shifted from developers feeling beholden to infrastructure or ops in order to do the things that they wanted or needed to do. Now that they've got Kubernetes, they can define storage and networking and load balancers and all the things that were under the remit of infrastructure. And so the question becomes, why would they ever want to give up the keys to the kingdom? In the broader sense, there are a lot of developers that are saying, and will continue to say, I don't want to deal with Kubernetes. It's too complex. I want to just write application code. I don't want to have to deal with this YAML nonsense and all these manifests and all these different things. But I think, by and large, there's a louder group, shall I say, that will continue to trumpet: I want a Kubernetes cluster; ops and infrastructure, get out of the way, just let me deploy to the Kubernetes cluster, right? And so if you say, you know what, you can use Docker Swarm, or whatever new fancy thing might come out that looks to make containerization simpler, I think if it means giving up some form of control, there'll be strong resistance to that.

    It does come back to control, that perspective. So who has to own the container platform? This is a nice full circle in some ways. If I'm running a Kubernetes cluster, or Docker, whatever you pick, I don't care, at some point the infrastructure team is going to own the storage, the networking, the compute resources, right? Kubernetes is the abstraction where the developers, or maybe even an app purchaser, can say: here are the containers I need. It needs storage, it needs compute, it needs networking. I don't actually care how those are executed. But I don't think it's that simple either.

    Well, so, I mean, take the parallel of Docker Swarm as an example, which I literally took a look at, like, two weeks ago, because I was like, I really want to take another look at container platforms, because I honestly don't want to deal with Kubernetes. There's too much going on there. Feels very OpenStack-y to me. So I want to identify something, and you start looking at what Docker Swarm has, and had, from a capability standpoint, and it's very compelling. I've even taken a look at Nomad and done something

    For small deployments, yeah, yeah.

    It becomes a case of: why the instant shift to Kubernetes, as opposed to one of these other options and alternatives? Obviously, you're going to look at market share, and the voices in the market become some of the challenge, in that noise moves the needle.
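    For comparison, the Swarm model being discussed really is compact: a Compose-style stack file plus one deploy command covers services, replicas, and multi-host networking. A hedged sketch (the service name and image are placeholders):

```yaml
# stack.yml, deployed with: docker stack deploy -c stack.yml mystack
version: "3.8"
services:
  web:
    image: nginx:alpine      # placeholder image
    ports:
      - "8080:80"
    deploy:
      replicas: 3            # Swarm handles scheduling and restarts
    networks:
      - app-net
networks:
  app-net:
    driver: overlay          # built-in multi-host networking
```

    The trade-off is that the ecosystem of operators, CRDs, and vendor integrations that grew up around Kubernetes has no equivalent here, which feeds the market-share dynamic described above.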

    That's the advantage of just how open source gets marketed.

    The loudest person usually is going to end up winning. So, to me, it would make sense to have a resurgence of a Docker Swarm or a Nomad or something else. But it seems like where we are right now is: I'm going to build the layer on top of Kubernetes, because Kubernetes is the cloud API. All it is is an API; it's not really just about running your containers. So there's a lot of that Kool-Aid being drunk, and people like the idea of continuing to use Kubernetes for everything, but it does have a costly overhead.
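    The "Kubernetes is just an API" point is worth unpacking: the API server is an extensible, declarative store, and a CustomResourceDefinition turns it into an API for anything, containers or not. A minimal sketch (the group and kind are hypothetical):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must be <plural>.<group>
  name: databases.example.com      # hypothetical group
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                engine: { type: string }      # e.g. "postgres"
                storageGB: { type: integer }
```

    Once registered, `kubectl get databases` works like any built-in resource; a controller somewhere still has to reconcile it, which is where the "build a layer on top of Kubernetes" pattern comes from.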

    Even now, Red Hat's actually promoting KubeVirt over OpenStack, right? That's the thing they're promoting. Yep.

    OpenShift Virtualization is what's being touted as the path forward. Once again, you get back to that noise.

    Well, it's funny, because, right, all the things that people need from VMware, the things that made the virtualization platforms really effective, aren't in any of those tools, right? The distributed storage, the external SANs, the networking controls. It's all TBD.

    Well, and so that's where I think it's geared towards a slightly different market, because that's where it gets back to this whole developer versus infrastructure-or-ops persona, or use cases. Unfortunately, time and time again, there's the thought that I'm just going to deploy a VM, and that's it. Similar to what we see with the public cloud: hey, I'm just going to go spin up an EC2 instance. But what about the thousand other things that the infrastructure team has had to do to make that production-ready, for years and years and years? It's like, well, I don't care about any of that, infrastructure team, you figure that out. I just need to spin up an EC2 instance. Similarly, I think we're at that point with KubeVirt and OpenShift Virtualization, where it's like, hey, it's just spinning up a VM.
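    That "it's just spinning up a VM" experience looks roughly like this in KubeVirt: the VM becomes one more Kubernetes object (a hedged sketch; the name, sizes, and disk image are placeholders), while all the production concerns, storage, backup, live migration, networking policy, still fall to whoever runs the cluster:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm                  # placeholder name
spec:
  running: true                  # start the VM as soon as it's created
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 2Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:         # ephemeral disk pulled as a container image
            image: quay.io/containerdisks/fedora:latest   # example image
```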

    You said you needed a VM. Here's a VM. Yeah. It does make me sad.

    Until there's a bit of a louder voice from the traditional ops standpoint, I think we're going to continue to be in this cycle: we're going to try to innovate with open source projects and do all these cool and interesting things, and somebody else is going to have to figure out how to actually operationalize it and make it not fall over or be hacked in production.

    Yeah, it's funny. I don't feel like the market is in the same place it was when we were doing things like Kubernetes. I don't see new open source projects floating through the market the way they did ten years ago. OpenStack got started at a time that primed the pump for these big projects. Even Chef, and Puppet before it: we were doing open source infrastructure projects. I just haven't seen them in the last couple of years, sort of post-COVID, where there's really a lit-on-fire open source project. So, I mean, maybe I'm missing something.

    I literally had this discussion probably a couple of weeks ago, looking at market dynamics. So you look at, let's say, a Chef, a Puppet, an Ansible, a SaltStack. Take those as examples. None of those are independent entities now; they've all been acquired. You look at a number of the other various open source projects, and honestly, HashiCorp is a great example: can you make the actual business work? And time and time again, the answer is a kind-of, sort-of, maybe. The other aspect is that over probably the last five years, certainly tying into your COVID time frame, there was a massive amount of VC funding injected. Almost an absurd amount.

    No, that was a big thing. People think the open source movement was happening on its own, but it was actually a VC-funded marketing model for a lot of VCs, right? You create an initial win, there's no resistance to the sale, and it has the veneer of being non-vendored. Maybe those projects are still happening in the Kubernetes ecosystem some; it just doesn't feel like it's happening the same way.

    I mean, it's continuing to shift. You look at example after example after example. You look at Buoyant, the company behind Linkerd: they came out with an announcement that they were tweaking some things to try and actually make money. Strange concept for a business. You look at a handful of others in that space, where it's like, okay, it's not cool anymore to burn through money working on an open source, quote-unquote, fun project. Now we actually have to figure out how to get sales and how to get people to actually pay us. So I think that's really turned down the dial on what we see. And the one example I would really put out there in terms of the VC funding challenge is Lacework, the software security company. When I saw it, I was very startled, but I think they did a funding round to the tune of a billion dollars, holy cow, something like that. And there have been recent conversations about them being acquired for 150 to 200 million dollars.

    That's a down round.

    There was a lot of money to go around, and when there's no strong incentive to figure out how to make money, you can go to all of the popular events, you can beat the drum for your open source project, all these different things. But when that money starts to dry up, or isn't as readily available, you've got to pivot.

    Well, I think we're seeing that adjustment, and it's funny, because I hadn't thought as much about how the funding worked. There were a ton of people in the Valley who would say, I have this cool idea, I'm going to go do it as open source. What was it, Tigera? I still see Tigera sponsoring and backing Calico, but they had to make some adjustments in how they went. They were on a tear. Weaveworks, yeah, Weaveworks definitely got hit by that. And Mesos, right? Mesosphere, becoming whatever, D2iQ.

    D2iQ, yeah.

    And they went out of business. Yep, they sold assets to Nutanix, and I think some others.

    Yeah, one of the interesting ones: we did see Rocky Linux, in the response to CentOS, spin some pieces up. And I know Greg Kurtzer, who's been on the series, smart guy, raised a bunch of money to do that work. I'd be interested to see how they're managing. I know he's still behind it, and they're consulting around a ton of open source projects, but I haven't seen the same thing.

    You know, we actually chose Alma as our distro over Rocky, and I'm trying to figure out why. But that whole thing is problematic, and we're actually moving more to embrace the Red Hat pieces, because our customers are not as invested in Rocky and Alma. They're just like, yeah, we're just using Red Hat or Ubuntu.

    Yeah. It's fascinating how these are all interconnected pieces, but there's a ton of inertia in building system-level stuff. That's the theme to me: the tool, or really the platform, that starts connecting pieces together, or creates a forward benefit to collaboration inside an organization, should be able to win. But we've got to overcome these barriers first.

    There are a lot of things behind the scenes that go on. People often talk about the military-industrial complex; the same applies to any sort of business in software. Vendors are incentivized to sell a brand new thing to make more money. Technologists at companies are incentivized to chase the new thing to make more money. Businesses are incentivized to implement the new shiny things, to attract the talent that wants to build brand new shiny things. And the analysts are incentivized to talk about the brand new shiny things, which the businesses will pay them to talk about, and which the vendors will pay them to collaborate on.

    No, and that is not often where the real value is being created. It's actually being created in these multi-year projects that have been out there a little bit longer. All right, Martez, this was fun. I actually liked hearing your thoughts on this. I think we covered some interesting ground.

    Wow, I really love when we go off schedule for a Cloud 2030 discussion. Even though we weren't able to hit the agenda items we had planned, we did have what I think is a really important conversation about what's going on, how technology is propagated, and what is pertinent to making buy and use decisions for vendored and open source technologies. I hope this was helpful. If it was, please join us: come to the 2030 Cloud, look at our schedule and our back catalog, and see the other things we've been talking about. There is always something interesting, and I hope you'll choose to be a part of it. Thanks.

    Thank you for listening to the Cloud 2030 podcast. It is sponsored by RackN, where we are really working to build a community of people who are using and thinking about infrastructure differently, because that's what RackN does. We write software that helps put operators back in control of distributed infrastructure, really thinking about how things should be run, and building software that makes that possible. If this is interesting to you, please try out the software. We would love to get your opinion and hear how you think this could transform infrastructure more broadly. Or just keep enjoying the podcast, coming to the discussions, and laying out your thoughts on how you see the future unfolding. It's all part of building a better infrastructure operations community. Thank you.