Hello, I'm Rob Hirschfeld, CEO and co-founder of RackN and your host for the Cloud 2030 podcast. In this episode, we start talking about walled gardens and the momentum and push that causes us to get into vendor-locked environments. This is going to be a multi-part discussion where we look at the drivers of AI in the future. In this case, we used up a lot of time before this recording talking about Kubernetes and what's next for Kubernetes and containers, and how that ecosystem has been shaping up. And so this conversation is around the walled gardens that could be broken down, and in some cases have actually been built taller, because of containerization and Kubernetes and how infrastructure works. So that's the background we had going into the discussion, and then we pick it up on how these ecosystems and walled gardens are self-reinforcing, and whether there are weaknesses, chinks in the armor, that will allow us to go back to interoperable standards. I'll let you be the judge of that based on this conversation. Enjoy.
What's happening, what we're watching right now, is OpenShift, which is the dominant enterprise one, basically defining a whole bunch of resource definitions that only work for OpenShift. And, you know, they're building out a much bigger platform on top of Kubernetes that is OpenShift-specific, and they control the lifecycle of all those pieces. Does
that lead into kind of what I thought we were going to be talking about today, which was the notion of, you know, platform convergence and the implications there? What you've just described is kind of an oligopoly, or an oligopolistic approach to the definition, and then the enforcement, of convention. I'm back, sorry.
Of course, we understand.
This is really interesting, because, without a doubt, you know, that is the topic for the day: AI-powered platform convergence. It was going to be triggered on the idea that Google bought HubSpot, or is rumored to be buying HubSpot. But what you're starting is this interesting conversation about the benefits of walled gardens, or the sort of virtuous cycle for walled gardens, where OpenShift, you know, Red Hat's not waiting for anybody to define these resources that it needs to build a bigger platform. And so it's out spinning up its own thing. But what that ultimately means is, you know, it's not Kubernetes anymore. It is defined by all of the resources and services that they've built to make their platform work. Kubernetes is just the framework for it.
It's OpenShift. Yeah, it is OpenShift.
It is OpenShift. And OpenShift is a platform, and they keep adding pieces, part of which is, you know, we're looking at it: how do we partner and collaborate with them? And even things like, I learned today that they've incorporated Backstage into OpenShift. So Backstage is that open source dev portal. They now call it, I think, the developer portal for OpenShift, or something, who knows, the Red Hat developer portal, which is just Backstage, but I think it's running as a service inside of OpenShift with pre-wired integrations, because they know they can get the CRDs that they need, right? It's this nice virtuous cycle. So I think that's true. Go ahead.
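For readers who want to make the CRD point concrete: custom resource definitions are how a vendor extends the Kubernetes API with its own types, and counting them by API group gives a rough picture of how much of a cluster's surface has become vendor-specific. Here is a minimal sketch, assuming the Python kubernetes client and an accessible kubeconfig; the openshift.io suffix check is purely an illustrative example of a vendor group, not a definitive test.

```python
# Sketch: enumerate CustomResourceDefinitions by API group to see how much
# of a cluster's surface area is vendor-specific. Assumes the `kubernetes`
# Python client is installed and a kubeconfig is available; treating
# groups ending in "openshift.io" as vendor-specific is just an example.
from collections import Counter

from kubernetes import client, config

config.load_kube_config()                      # or load_incluster_config()
api = client.ApiextensionsV1Api()

groups = Counter()
for crd in api.list_custom_resource_definition().items:
    groups[crd.spec.group] += 1                # e.g. "route.openshift.io"

for group, count in groups.most_common():
    vendor = "vendor-specific" if group.endswith("openshift.io") else "other"
    print(f"{group:40s} {count:3d} CRDs ({vendor})")
```

On a stock upstream cluster this tends to print a short list; on a platform distribution the vendor groups tend to dominate, which is the lock-in dynamic being described here.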
Sorry, are we... I apologize for having to drop off. It was urgent.
Are we now speaking about
the chink in the walled garden,
or created,
If you see a chink, I would love to hear it. I had thought we were going to go a little deeper into the synergies of AI driving the walled gardens to be deeper, yeah, to be higher. But if there's a chink in that armor, I'd love to hear it. The
chink in the armor is really around, I mean, I see it in a very practical way, right? Like if I think about shop floor equipment, and, you know, call it ABB, or call it whoever, anybody. The fallacy of their walled garden is there is either no responsibility from the enterprise point of view to upgrade, or it's a forced upgrade by the OEM who's providing the license on the firmware that's embedded in that device. Neither of those work in today's environment, where you need to be adaptive and adaptable. In other words, I can't wait on an equipment OEM to figure out how to go from Windows 95 to Windows 11, right? Do it properly, do it securely, and then dump a patch on me, because I don't have (a) the time, (b) the flexibility, and (c), what else is it going to cause me difficulty with, right? Because I'm in an integrated environment. Pardon
me,
Absolutely. So the fallacy of the walled garden is, if I don't like what you're doing, because you're not keeping pace with either cyber threats or patches, fixes, upgrades, whatever, I will not buy another piece of your equipment. I will go to your biggest competitor, or I will take it apart, put my own capability in, and you can't come back at me for violating your warranty or your terms and conditions, because you haven't kept pace. And so one by one by one, all the equipment manufacturers, and this is not just manufacturing, this is all infrastructure, you know, the grid, all capital equipment, bus lines and transportation. This is across the board in a variety of industrial and manufacturing and even consumer goods sectors. So the chink in the walled garden, as I see it, for what we were speaking about earlier, for the Kubernetes side, is if it doesn't get lighter and easier to use, it will be subsumed, replaced, I don't know by whom.
Interesting, because part of where I thought you were going with that was that delivering the software all of those infrastructures require, switching it into containers, actually takes out the burden of, you know, are you Windows or this or that, are you on a BIOS, patches, all that stuff. And I would expect, I haven't seen it, because I think there is a burden for, you know, who's managing those containers, just like we had burdens with who's managing the VMs when they started delivering the software in VMs, right? But I would expect all of those manufacturers would dive into "we're going to deliver the software for you in a container" like
crazy. Yeah, they should be, but they're not. And the reason is back to the earlier point that I made: it's not so much the in-flight or at-rest data, but the variety of protocols. Because remember, each one can be a proprietary lingua
franca and protocol. Right,
in robotics, there are three or four right now; in, you know, PLCs, there's something completely different from every single manufacturer out there. So it's that diversity that's preventing me from saying, yeah, we'll throw it in a container and it can go anywhere, be orchestrated, and that's it, right? Everything is easier. See,
That's what surprises me. Because I would expect, from what I'm seeing, that having a, you know, cluster, a Kubernetes cluster, or it could be a lightweight Kubernetes cluster, that still conforms to "this is how you get a container, this is how you define a pod spec," would be a fantastic way to deliver some of those capabilities. But I don't think they can assume that the expertise is available in the field to actually maintain that cluster. And now you're back to the same thing. I mean, back in the early virtualization days, the companies were terrified that you were going to spin up VMware; they didn't even ship a VHD. They were worried that you would install in a VM and then, you know, it'd be resource constrained, and your software would fail, and they'd get calls because things weren't working. Ultimately, the challenge with all this stuff is that the people delivering that software don't want the burden of troubleshooting your IT infrastructure, and so they just package it away from that. Well,
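As a sketch of the "pod spec as the delivery interface" idea being discussed here: the example below packages a hypothetical vendor workload as a plain Kubernetes pod with resource limits, assuming the Python kubernetes client and an existing cluster, which could be a lightweight one such as k3s. The image name and the resource limits are placeholders, not any real vendor's product.

```python
# Sketch: delivering a vendor workload as a pod instead of a VM image.
# Assumes a kubeconfig for an existing (possibly lightweight, e.g. k3s)
# cluster; "examplevendor/shop-floor-agent:1.2.3" is a hypothetical image.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="shop-floor-agent", labels={"app": "vendor-agent"}),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="agent",
                image="examplevendor/shop-floor-agent:1.2.3",  # hypothetical
                # Resource limits address the old VM-era fear of the
                # software being starved on a shared host.
                resources=client.V1ResourceRequirements(
                    requests={"cpu": "250m", "memory": "256Mi"},
                    limits={"cpu": "1", "memory": "512Mi"},
                ),
            )
        ],
        restart_policy="Always",
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
print("Vendor workload delivered via a standard pod spec.")
```

The point of the sketch is that the interface the vendor has to target is the same everywhere a conformant cluster runs; the open question raised in the conversation is who maintains that cluster in the field.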
They have, but I'm not seeing anybody go to containers, which is part of the reason that I brought this whole thing up. Because, I mean, here's the practical problem. I have a client who actually wants to use containers to do this, to distribute code, put it in containers, make it cloud friendly, etc., etc., and can't get from here to there, because every piece of equipment that the code actually works with is different, and there are n to the m
to the x permutations,
because every time,
and they can't, they can't make use of container-based processing.
They're trying to figure that out, but there are a lot of unknowns. Because think about this: anytime you get a new computer, you tweak it to your configuration, how you like it. Every piece of capital equipment, they're doing the same thing. No two John Deere machines, you know, act exactly the same way, are on the same version, etc., etc. So,
in, like, four very different contexts this week.
Yeah, yeah. While it is a memory hog, I mean, I have the use of kind of the golden master, you know. I can put something in Docker and use Docker on my local machine, have it, you know, basically kept up to date pretty easily. It operates reasonably flawlessly. I mean, I can get in there and tinker with it, but if I get in there and tinker, you know, kind of the warranty is voided, and then I have to decide, okay, do I even need a Docker container to do this? But I really find, you know, personally, it's not just the convenience. There's a safety factor and a consistency factor. And, you know, having stuff run inside the container, get updated, I feel, you know, pretty reliant on that. And, you know, I can't imagine... well, here's the question Rob raised. Rob raised the issue of, you know, does this go down a path where whoever's providing containerization also ends up, you know, having a technical support nightmare of fixing every individual organization's, or every individual user's, you know, variations on the theme? Because they can make the tweaks and make changes. It points to a different kind of business model and a different set of technologies that are going to deal with those kinds of variations, or kind of minor enhancements, minor changes, to the master, the golden master, if you want to think of it that way. That's a place where I think AI could fill in a lot of gaps. It may not be able to do massive re-architecture, but if I swap in some components that are not standard, or, you know, add a new extension, build my own plugins and throw that in there, if that technical support burden can be managed successfully with AI, both in the development and then the ongoing technical support, then you've got at least some kind of path to normalcy, and a reason for getting into that business. Now Rob might disagree, given that he lives this life, you know, every day. Does that make any kind of sense to you?
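One way to picture the golden-master versus tinkering tension, purely as a sketch: compare the digest of the locally present image against the digest the vendor blessed. This assumes the docker Python SDK and a local Docker daemon; the image name and the pinned digest are hypothetical stand-ins, not a real vendor artifact.

```python
# Sketch: checking whether a local "golden master" image has drifted from
# the digest the vendor blessed. Assumes the docker SDK (docker-py) and a
# local daemon with the image pulled; GOLDEN_DIGEST is a hypothetical value.
import docker

GOLDEN_IMAGE = "examplevendor/shop-floor-agent:1.2.3"          # hypothetical
GOLDEN_DIGEST = "sha256:0000000000000000000000000000000000000000000000000000000000000000"

client = docker.from_env()
image = client.images.get(GOLDEN_IMAGE)        # raises ImageNotFound if absent

# RepoDigests looks like ["examplevendor/shop-floor-agent@sha256:..."]
local_digests = {d.split("@", 1)[1] for d in image.attrs.get("RepoDigests", [])}

if GOLDEN_DIGEST in local_digests:
    print("Image matches the golden master; support terms presumably intact.")
else:
    print("Image has drifted from the golden master; this is the support-burden case.")
```

A check like this is the boring half of the problem; the conversation's point is that handling the drifted case (every site's local tweaks) is where a different business model, possibly AI-assisted support, would have to come in.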
The conversation I've had four times this week is this idea that standardization is so hard we just give up. That's the summary of it. We keep having this idea that what we all can see would be, you know, reusable patterns and standards, and investing in things that would improve them, but the barrier to get to that point is so high that the people who are making the individual decisions don't want to make that investment, and they're just tossing their hands up in the air.
So the result is, I'll adopt this vendor's, you know, custom... Custom
is the way. What you're doing is you're saying, now is not the time to draw the line on yet more variation.
I'll go with, I'll live with OpenShift, you know, for another year, yes, or another year, or another two years. Well,
it's interesting, because it's not even the... what we keep running into is "doing this in a generic way is too hard, so I just don't." So, like, when we're dealing with the OpenShift install patterns, OpenShift says, all right, I will create a file that you can use to go build stuff. Or, you know, I'm going to give you documentation about how to build your load balancer, but I'm not going to include a load balancer or any of that, because as soon as I dip my toe into that water, I've got to support it. And here's the other problem: I've got to fight all of the other customization and noise that goes into taking that to market. And so what happens is, when you look at the OpenShift piece, it does OpenShift, and then there's a big "there be dragons" line in the spec that says, you have to figure out how to do your own thing, because we don't have a generic way to reach into that layer. And I'm using a specific example, but I've had this conversation literally about AI cluster builds, about edge infrastructure, about, you know, robotics AI, what's going on in the field, where people are like, there's too much heterogeneity today. I can't wrangle the systems to reduce that or conform in some way. So I'm just going to keep walking down the path of,
you know, picking a vendor.
Yeah, or what you do is you go down the opposite path, and you're just like, all right, I'm all in on walled garden X, right? I mean, we see it with iPhones right now. You know, living in the Android world, compared to my spouse, who's happily Apple everything, it's a harder world to walk in. And that's still a single vendor's, mostly a single vendor's,
product. Well, yeah. So,
Okay, so the second part of this non-hypothetical, real-life example is that heterogeneity at a point in time becomes a lack of interoperability,
and it certainly becomes complexity. And if you bought into it on the promise of getting it personalized, "I can make it conform to and do exactly what I want, because I've got all the piece parts," and you've at least been promised that all those piece parts are going to work together, and then you find out they don't.
And this is what it's
more like. It's more like I have a green marble, a blue marble, and a red marble, and each of them is a walled garden, and you've told me that... okay, so I hate the software in the red marble garden, but I love the hardware. I hate the hardware and love the software of the blue marble. And now I'm being introduced to a green marble where I kind of love both, but I have 10,000 red and 25,000 blue. And how do I make them all interoperate with each other? Because if I don't, people could die.
And literally, the vendors are disincented from the interop. Exactly right. And then the topic that we'll get to, because I put this back on the calendar, because we approached this from the container perspective, which I like. What I was actually looking to do in a future conversation is look at it from the AI data side. Because I think the idea of your... and I'm already hitting this. I'll give a very short story to tee us up for the future conversation. Maybe I'll just make this the next one; I'm going to bounce things around. I'm going to move this continuation to next week, and then I'm going to move vectorization to the digital twins discussion. But here's the story. So I was at GlueCon, and there were several companies that were offering to create Slack bots for me, trained on my data.
All you have to do
is give them access to your Google Drive and your Slack and your docs and your GitHub, and they'll build a model for you, and then you can interact with the model. You probably had the same reaction I did. And I'm like,
you know, in what universe am I going to do that? And, yeah, show me how many times you've done this before, and with what end result.
By the way, tell me who you are to begin with.
Yeah, I think our customer contracts would freak out if that was happening. But
my data is so fraught with so many things. It's so important. In my mind, it is the most vital thing to be paying attention to right now. And that means everything from, you know, data governance, to vectorization, what you do at the base level, to curation, all of these things. Yeah, absolutely.
So next week we'll talk about this, you know, how the need to pull those things together, I think, is going to create some consolidation in the market. Let
me ask you both a question. I mean, everybody is highly dependent upon vectorization, but everything that I'm experiencing, you know, hands-on, tells me that relying on even the cool tricks you can do with vectorization is just not sufficient, and that, in point of fact, there are some missing ingredients that are going to get me from kind of an 80, 85 percent solution that I can build from pure vectorization up into the 90s, the high 90s. And the thing that's missing is, well, I don't even know how to quite characterize it, but it's the kind of bump, the kind of amplification or enhancement you get when you add ontologies and structure to some of the models that are being used, the models of reality that the LLMs are using. So it could be, you know, graph databases and knowledge graphs; it could be a variety of other things that have to be thrown into the mix here. Because by itself, vectorization is not sufficient. It's absolutely necessary; it gets you, you know, 80, 85 percent of the way there. It's the basis on which we have, you know, the transformers. Fantastic. But not sufficient. Good.
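To ground the "vectors plus ontology" point in something runnable: below is a minimal, self-contained sketch of hybrid retrieval, where plain cosine similarity over embeddings is combined with a tiny hand-built knowledge graph that boosts candidates related to entities found in the query. The embeddings, the graph, and the weights are toy placeholders, not any particular product's approach or what the speakers have built.

```python
# Sketch: pure vector similarity vs. vector similarity augmented with a
# small knowledge graph. All data here is toy/hypothetical; real systems
# would use learned embeddings and a proper graph or ontology store.
import numpy as np

docs = {
    "d1": "OpenShift adds vendor-specific CRDs on top of Kubernetes",
    "d2": "PLC firmware upgrades are controlled by the equipment OEM",
    "d3": "Knowledge graphs add structure that embeddings alone miss",
}

# Toy 4-dimensional "embeddings" (stand-ins for a real embedding model).
emb = {
    "query": np.array([0.9, 0.1, 0.0, 0.2]),
    "d1":    np.array([0.8, 0.2, 0.1, 0.1]),
    "d2":    np.array([0.1, 0.9, 0.2, 0.0]),
    "d3":    np.array([0.7, 0.1, 0.1, 0.6]),
}

# Tiny knowledge graph: entity -> related documents (an ontology or graph
# database would normally supply this structure).
graph = {"Kubernetes": {"d1"}, "OEM": {"d2"}, "ontology": {"d3"}}
query_entities = {"Kubernetes", "ontology"}    # assume entity extraction found these

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def score(doc_id, graph_weight=0.3):
    # Base vector score, plus a flat boost if the graph links the document
    # to an entity mentioned in the query.
    base = cosine(emb["query"], emb[doc_id])
    related = any(doc_id in graph[e] for e in query_entities if e in graph)
    return base + (graph_weight if related else 0.0)

for doc_id in sorted(docs, key=score, reverse=True):
    print(f"{doc_id}: {score(doc_id):.2f}  {docs[doc_id]}")
```

The graph boost is the crude stand-in for the "missing ingredient" being discussed: structure that vector similarity alone does not capture, at the cost of having to build and maintain that structure.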
Sorry, quick question. Are you thinking, and it may not be the right term, contextualized semantics?
Well, that's certainly one. Yeah, I think that's one approach, or one viewpoint on what might be a solution or add to the solution. I'm honestly telling you, I don't know that I can give it a single characterization, "oh, that's what's going to make the difference." But it is amping up context, absolutely; it's providing additional guidance, direction, sometimes guardrails. And that is, without going back to what I was talking about earlier about just the raw data, I really think that the ontologies, the incorporation of graph data and knowledge graphs, is vital to really good solutions. But those come with a whole multitude of, you know, headaches and costs, yeah, well, cost, but also just, you know, they're not easy to wrangle either. So, no,
no, I meant cost in terms of downside, and there's a host of them. But remember, Rich, I said contextualized semantics, so a semantic layer on top of what you're talking about,
Semantics. Yeah, a semantic layer, and then
contextualized on top of that. So you're going to take standard semantic layer references and you're going to contextualize them to be either adaptive or dynamic. That's the only way I can think of that you would get to the high 90s that you're after, because semantic layer creation and advanced contextualization will come together at some point, but they are two separate things. That's why, when I described it, I said contextualized semantics.
Well, certainly, you know, I'm coming at your definition probably from, you know, the data engineering side. When I think about semantics in a semantic layer, the contextualization... you separate them, and I'm not sure
I separate them, because contextualization only really works extremely well, with high degrees of performance and proficiency, when it is in situ,
which means it's dynamic. Which, yeah, I mean, basically, you don't nail the context down and then, you know, let something wander around inside the fenced-in area. Correct. It's got very pliable, flexible, extensible boundaries. And yes, okay, so it's dealing with the world as it is, the dynamic nature of it, in situ. Okay, now I understand what you're saying. Thank you.
Yeah, so does that make sense to you, splitting it that way? I'm not sure that it's technically feasible.
Well, yeah, the question is, I'm not sure I know what that looks like. Contextual... yeah, I don't know what it looks like. I don't have a feel for
it. Okay, so quick example, and then I promise I'll be quiet. In real time, we are talking about contextualization and the semantic layer coming together, where the contextualized semantics that I'm referring to is exactly that: resonating with my words in real time to change the context as the flow of the conversation changes.
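One way to picture "changing the context as the conversation flows," offered purely as an illustrative sketch and not as what either speaker has built: blend the embedding of the current utterance with a decayed average of recent turns, so that what counts as relevant drifts with the dialogue. The toy embedding function and the decay factor here are hypothetical stand-ins.

```python
# Sketch: a "dynamic context" vector that drifts with the conversation.
# Toy numbers only; a real system would use a learned embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical stand-in for an embedding model: hash words into 8 dims.
    v = np.zeros(8)
    for word in text.lower().split():
        v[hash(word) % 8] += 1.0
    return v / (np.linalg.norm(v) or 1.0)

def update_context(context: np.ndarray, utterance: str, decay: float = 0.7) -> np.ndarray:
    # Exponentially decayed blend: older turns fade, the newest turn dominates.
    blended = decay * context + (1.0 - decay) * embed(utterance)
    return blended / (np.linalg.norm(blended) or 1.0)

context = np.zeros(8)
for turn in [
    "walled gardens around Kubernetes and OpenShift",
    "shop floor equipment and OEM firmware upgrades",
    "vectorization plus knowledge graphs for retrieval",
]:
    context = update_context(context, turn)
    print(np.round(context, 2))
```

Whether a mechanism this simple gets anywhere near the "high 90s" being asked for is exactly the open question in the conversation; the sketch only shows the shape of a context that is not nailed down in advance.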
Well, I understood all the words. I'm not sure, again, that I understand what you're looking for. I think what I was referring to is, I can't, at the moment, figure out what means I have at my disposal to actually implement dynamic contextualization, independent of the semantic layer issues.
agents,
Live agents. Well, you know, you're talking to somebody who's been, you know, a bit of a buff on agents and agentic technologies for decades. So I buy that. I mean, yes, and yet I know of no agentic technologies that seem to be fitting the bill. There may be a few glimmers coming out of some of the most recent research, some of the stuff that Yann LeCun keeps talking about, but also some of the stuff that Anthropic just published, you know, last week, about the dictionary learning. Yeah, I have spent a couple of hours kind of reading and rereading that paper, and there are parts of it where, you know, it's like, wow, this is great. And then there's, you know, the old cartoon: then the miracle happens. And then, you know,
magic happens here.
Magic happens here. Yeah, I'm still looking for it. Anyhow, well, I've gotta pick this up next week. Yeah, but it's the right conversation to have. And I'm glad we started it today so I can start to
be ready for next time. All right, I'll see you.
I'll come loaded for bear, right? Ooh, okay, see you then. Bye.
Well, there's so much to consider in how we're building these systems. And I really do mean what I said about, you know, people just sort of throwing up their hands and deciding that we can't have consolidated, integrated systems, that we're stuck building things, you know, one site at a time, one team at a time, one company at a time. I don't think that's true, but it definitely is the prevailing wisdom at the moment. And, you know, hopefully we'll figure out ways to do better. One of the ways that I know we will do better is by talking about it in these conversations. Thank you for listening to the Cloud 2030 podcast. It is sponsored by RackN, where we are really working to build a community of people who are using and thinking about infrastructure differently, because that's what RackN does. We write software that helps put operators back in control of distributed infrastructure, really thinking about how things should be run and building software that makes that possible. If this is interesting to you, please try out the software. We would love to get your opinion and hear how you think this could transform infrastructure more broadly. Or just keep enjoying the podcast and coming to the discussions and, you know, laying out your thoughts on how you see the future unfolding. It's all part of building a better infrastructure operations community. Thank you.