Hello, I'm Rob Hirschfeld, CEO and co-founder of RackN and your host for the Cloud 2030 podcast. In this episode, we springboard from DeepSeek AI into a robust conversation about what impact DeepSeek is having on the industry and where we see things going; into the dilemma of people building AI infrastructure and working to do it quickly, robustly, and with strong governance, so they can quickly update and manage the AI infrastructure they're spending so much money to build; and into a broader conversation about virtualization, containers, and OpenShift. Just a huge number of topics, really crossing the board on the most pressing issues for AI infrastructure, and infrastructure in general, for people to consider, enterprise, cloud, or otherwise. I know you'll enjoy the conversation.
I'm assuming we'll talk about DeepSeek a little bit first. Rich joined in on Tuesday, and it was Klaus, me, Rich, and the RackN team, and we had a very robust conversation about it. I actually meant to record it, because I thought it was really useful. Yeah,
I wanted to, but I had other commitments, unfortunately,
understandable. Well,
we'll catch you up. It's a slow-moving train wreck, as much as the market was panicked.
Well, I think it's called commoditization.
Oh, dear. It's been a lot of pieces moving on that one. I'm happy to jump in. And then I've been working a lot on this containers-versus-VMs piece, and I'm happy to talk through some of that. I'm about to write and do a short video on that, so I'll talk it through, and I'd love y'all's input. Sure, but DeepSeek first.
I don't know how much recap... I don't think any of y'all need a recap on it. New thoughts on where we are? Is it still earth-shaking, industry-changing AI, or are people coming off the cliff? I don't think any of us were on the cliff. The industry was. Yeah, the market was.
I think the biggest thing that DeepSeek has changed is really not the technology, it's
the market.
And that's what has people scared. Or yeah, at least the people with money,
that they're chasing the wrong horse, that there's no moat. Oh, that
their golden goose, aka open AI, is gonna stop laying eggs.
Ergo, my first comment: commoditization. I think we're getting to a point where LLMs are going to commoditize, and then the agentic layer is going to kick in, and small language or specialized language models will dominate, and then we'll have the data set market
as a spin-off. I'm sorry, the what market?
Data set. I've collected a lot of data. It's a specialized thing for industry X.
Training. Training this industry, and I've got to keep updating it, tuning it,
updating it, cleaning it, tuning it, wrangling it, whatever. I'm going to build processes around it, or I'm going to take my processes and leverage them into it, and that will be a layer of application-specific (application is the wrong word) orchestration and choreography layers.
The data sets are likely the basis on which you have these smaller, very focused, tuned models that get deployed as a team, as a mixture. And I think there's an interesting side portion of the market that opens up with regard to the ongoing management and governance of the data sets per se: the curation and kind of quality assurance that it's the right set of data to go into the LLM, and so forth. I think, to Klaus's point, it changes the market. This was not a technical or technological breakthrough, with the possibility that there's one thing I don't think they're coming clean about. Other than that, we've seen them use techniques that have been utilized by others, and there's some knowledge about them. The thing that I think is going to start happening is you're going to find OpenAI coming after them and pushing the government to pounce on the fact that they've deployed distillation in a way that, they will say, has invaded our IP. And I kind of sit there and go, you know, live by the sword...
Exactly, I was about to say that. Like, on the topic of data sets: as an outsider, it is just hilarious watching Sam Altman cry about how DeepSeek stole data from OpenAI while OpenAI themselves very blatantly stole data from authors.
Yeah, exactly.
You know, it's funny that you should say that, because I have an article in progress that is entitled "The Pirates of AI." And it's for exactly that reason: you stole from everybody, and now people are stealing from you. What claim do you have? And I'm trying to do it in characters of... oh, shoot, the names are escaping me now... you know, familiar pirate names.
Blackbeard, yeah, exactly,
yes, yes, Long John Silver, right. There's Long John Silver. And then there's the Peter Pan character, you know... who? Captain Hook, yeah. Captain Hook, exactly.
Probably the most quotable of them all, exactly. And Jack Sparrow. Can't forget Jack Sparrow. No,
I cannot. But I have this list of characters and their personas, or things that I can figure out about them, and I'm trying to put them into this article as the Pirates of AI. And I'd like the operatic background from The Pirates of Penzance to play in the background of it. But that's hard to do.
There's definitely the amazement of there not being honor among thieves, maybe.
Well, this goes to why you need to trust your data, or don't trust your data, and the brokers of data.
yeah, yeah.
There definitely is. I mean, there is an element of standing on the shoulders of giants in using these other LLMs in the training. But that's how things are built, exactly. And I think what's interesting, what's brilliant about it, is that there's less technology novelty, less technology innovation, and more business innovation. And choices regarding revelation of the weights and biases: to the degree it's open source, that makes a big difference. And to the degree they have encouraged, or at least not stood in the way of, fine-tuning of the distilled models that are being distributed by them, not by outsiders who are distilling or quantizing them. There are a couple of modest-sized local versions, one built on Llama and the other built on Qwen. I think one's a 7 billion and one's an 8 billion, and in both cases I was taking a look at them yesterday, after our conversation at noon on Tuesday. Both of them are readily usable as the basis for a fine-tuned local model that's pretty damn performant.
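For listeners who want to try what's being described here, modest-sized distills like these can be served by a local runtime and queried over HTTP. The sketch below assumes an Ollama-style server listening on localhost:11434 and a `deepseek-r1:8b` model tag; the endpoint path, port, and tag are assumptions for illustration, not details from the episode.

```python
import json
import urllib.request


def build_chat_request(model: str, prompt: str) -> dict:
    """Assemble a minimal Ollama-style chat payload for a local model server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for a single JSON response, not a token stream
    }


def ask_local_model(prompt: str,
                    model: str = "deepseek-r1:8b",
                    url: str = "http://localhost:11434/api/chat") -> str:
    # Assumes an Ollama-style server is running locally; the model tag,
    # endpoint, and port are illustrative assumptions.
    payload = json.dumps(build_chat_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]
```

The payload builder is split out from the network call so the request shape can be checked without a running server.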
So is this next year's TikTok argument? Well, there were orders about how, like anything government, you couldn't use it. The fact that IBM incorporated it right away into watsonx as a model that could be used, I found very interesting. And yet all of the government decided no, not going there, it's Chinese. And my response to somebody on Twitter was...
Well, I don't think anybody... nobody outside of the insiders really has visibility into what else is in there and what has been done, and at what point things kick in. The one point where I continue to think they've really added to the technology is the way they can create a really large number of experts in the mixture and have them communicate with one another without an enormous amount of overhead. There's something about what they've done. It may have to do with the other point that got raised on Tuesday, which was that they basically threw out the use of CUDA and just nailed all of the parameters down, so that it wasn't as dynamic as what you would usually do with a big model. Klaus, you were mentioning some aspect of that. Or at least I think it was you, and somebody else had a conversation about the workaround with regard to CUDA.
Yeah. So instead of using CUDA, which is higher-level, they use PTX, which is assembly-like. So it still requires an Nvidia card, but I think this also heralds a potential shift away from CUDA as the end-all-be-all for machine learning. We may have gotten to a point where we're past the experimentation phase. And it's the same with scripted versus compiled languages: with a scripted language, you can prototype really quickly, you can build, you can optimize your algorithm. But once you're at the point where the algorithm is good enough, you start compiling to squeeze that extra performance out of it. And as DeepSeek has shown, that performance gain is noticeable.
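The scripted-versus-compiled trade-off described here can be seen in miniature inside Python itself: prototype with a plain interpreted loop, then, once the algorithm is settled, swap in the compiled implementation of the same operation. This is only an analogy for the CUDA-to-PTX move, not DeepSeek's actual code.

```python
import timeit


def summed_naive(xs):
    """Interpreted loop: easy to prototype with and tweak."""
    total = 0
    for x in xs:
        total += x
    return total


data = list(range(100_000))

# Same algorithm, same answer; the built-in sum() runs as compiled C,
# the way a hand-tuned kernel trades flexibility for raw speed.
assert summed_naive(data) == sum(data)

t_loop = timeit.timeit(lambda: summed_naive(data), number=20)
t_builtin = timeit.timeit(lambda: sum(data), number=20)
print(f"interpreted: {t_loop:.3f}s  compiled built-in: {t_builtin:.3f}s")
```

The point is the speaker's: once the parameters stop changing, specializing the implementation buys performance at the cost of flexibility.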
I would say of the two things that have come out that are noticeable from the outside, there's the workaround for CUDA, and then what I'm assuming is something new, which is the communication amongst and between the different expert models. Those are two really significant deployment innovations, architectural issues. And then there's the business issue: first of all, working around CUDA and potentially opening it up for other semiconductor companies and other hardware is pretty important to the industry. I take the point you've made: it's about the market. It's changing as much as anything.
Have you had any impact, Rob? Has it come back to you from any of your clientele? No, we haven't seen it yet. I mean, we're doing a lot of enterprise work, and they're not really at the AI level yet. The people we have doing AI cluster work, we're not going to see that as a difference. They're going to use the GPUs differently, but the build part that we help them with, we're not seeing it change. So, I would love... I just spent an hour talking to my team about this, this whole thing of AI builders thinking that they're reinventing infrastructure and not getting the help. They're not building well-governed infrastructure. They're basically rebuilding stuff that's 15 years old to get systems running, I'm air-quoting, "as fast as possible."
I think the large enterprises that are finally starting to dip their toes, that did some work with gen AI and realized it's not going to get them from here to there, and are now trying to be more focused and purposeful about what they do with AI, are starting to realize that, oh, you can't get from here to there to achieve that outcome. Those are the companies that are going to start looking at the infrastructure and looking for the help, right? Because
we're trying, I mean, for us, we're trying to get to reference architecture as a standard process. I do think this type of disruption helps; they'll feel justified in moving cautiously here. Yes, they're rightly expecting innovations that will improve efficiency and make things more standardized for them. So that's a big factor. The thing, maybe a little off topic, maybe not, is that for a lot of these AI companies, the speed of their build, the quality of the operations, the governance: they're not worried about those things at the moment. And so I've been trying to figure out how to help those companies move faster, because they have to build things yesterday, and the pace of AI is only going to get worse. So from that perspective, if you're building an AI cluster and you don't already have a story, even before you build it, for how you're going to reprovision and update that cluster with high reliability, then you have a problem. But they don't see it that way at the moment. Yeah, well, I think that goes directly to the other topic at hand, which is: if you think about the next generation of vendors delivering piece parts of infrastructure, specific approaches to AI, the agentic, what have you, it comes down to how it gets delivered. And we've had this conversation a little bit, you know, about the ISV of the future. Well, these are starting to look more and more like they might be a major portion of the ISVs of the future. So I think it behooves us to actually have that conversation you've talked about and that you're working on. I'm actually, after a couple of conversations yesterday that drove me to put it together, trying to write a one-pager on AI ops to try and explain it.
We keep talking to people, and the operators at these AI companies are saying, I'm too busy to get help. It raises the sharpening-the-saw problem. And the conversations I had yesterday were about the investors: all that stuff is heavily leveraged. This was a fascinating piece, right? Nobody's buying AI servers; they're borrowing money to buy AI servers. Technically, the banks are buying a lot of AI servers right now. Yeah, for the investors. When have you seen that happen before, right? It's all the makings of a 1999-2000-era bubble: let me build data centers and lay fiber. Yeah, how strange. Somehow I need lots of money to buy land and build speculative infrastructure. And so the thing that stands out to me is the idea that people don't even know what they're building, or how they're building it, or what they've built, which, to the investors and to the banks, is potentially a problem to the individual. Yeah, go
ahead. Sorry. Anecdotally, there's a large data center company that I spoke to yesterday that is seeing something similar. But what they're also seeing is: it's great for us, but it's going to be terrible for our customers, because they really don't know what the hell they're doing, quote unquote. And they have two $50 million contracts that they're sitting on, and they're very reluctant to take them on, because it's all about two very large corporations who have decided that they're going all in on AI, and yet they don't feel that their technical people have the expertise to be able to even request
the right kind of... ask for help. Yeah,
they don't know how. No, it's like... I mean, literally, one of them said to me, "Shall I write the prompt for you?" was their answer. Like, maybe you need to use AI to figure out how to ask the right question to get the answer that you're actually after. Yeah, there's a huge gap between those making the requirements documents and those that are trying to fulfill them. And the question that they had was, how do we counter this in a way... you know, we can do this from a thought leadership perspective and whatever. It's not about a marketing campaign. It's about not trying to meet expectations and missing the mark so significantly that you're going to kill yourself and kill your business in the process, right? Because they're being asked for one thing, they're trying to address the need, but they know that need is never going to work, and when they try and suggest otherwise, they're getting pushback on it, which I find is
So the data center company is suggesting that the customer is not asking for the right stuff, and their suggestions are not just ignored, but actually getting pushback? Yeah,
it's an absolute pushback, not just on the cost, but on the design and what's going to be put into them. And it's across the board: nah, you guys are crazy. What they're really worried about is two things. One is that there's a big potential upswing in their market share if they can do this properly, but they don't know how to convince the market that the market is wrong in what it's asking for, right, because of the lack of knowledge.
That's what we keep hearing. I mean, if we've reached the point of Dunning-Kruger on the part of enterprise requests, where do you sense this expertise, or this, you know, un...
unjustified need? Oh,
no, where does this kind of unjustifiable sense of expertise, of "I know what I want," come from? I feel like y'all were eavesdropping on my last conversation, because we were talking about the egos of these AI platform people who feel like they have to be the experts in every topic, right, and have to build something new and novel. Part of the challenge, Rich, is that it's not Dunning-Kruger so much as: the stuff we're building is new and novel, and I'm going to reject everything that came before it, because what I'm building is new and novel, right? So I was proposing something that went along the lines of: it's not an AI data center, it's a world-class data center with AI, right? Because all the data center infrastructure stuff, the best world-class stuff, governance, all those things, you can get that today, and then add AI into it. They're building AI data centers; they don't think that they're going to get a lift or a benefit by using anything that anybody else has learned in building them. What do you get a sense they are inadvertently, or even purposefully, leaving out of their requests for these data centers? They don't think about process. They don't think about update. They don't think about inventory controls. They're not thinking about the OS management on top of this. They're not thinking about security. They're leaving out everything. It's literally: what's the minimum I can do to get my AI cluster running?
And here's the other part. Many of them, because I looked at a number of the different requests that they've gotten, many of them are going, well, and this blew my brain: if I go off of a cloud provider and into a data center, will I have more security or ownership of my data, but with all the bells and whistles that I get from a cloud provider, if I go down this road? In other words, there's a mixed metaphor of: oh, this is my next version of cloud, but it's going to cost me less. And they're not making the connection, are they?
Are they specific at all about the bells and whistles they're getting from the CSPs with regard to
data? Well, this is the other interesting part: they haven't got a freaking clue. Because when they go through the bills from the CSPs, they don't know where the optimizations can be gained, and they don't know where the overlaps in services are. In one case of the two very large companies, this was an RFP that went out, and it was written by the CFO, not by the CTO or CIO. It was written by the CFO, who's looking to save money and leverage AI as their new way of cost saving to get off the cloud, because they're spending $1.3 billion a month on cloud services.
Right. And somebody actually thinks that the CFO's requirements are clear enough and clean enough that that's what should be acted upon?
No, well, somebody thinks that the CFO needs an education in AI.
Well, but the CFO is making the request at the behest of somebody. Either it's the executive team, it's
the board. The board wants them to cut back on their cloud costs significantly, correct, and that
somehow sprinkling AI pixie dust on it will make the difference. And so, just, right,
there you go. To which the data center company says: help us out here. What do we do? What's your best advice for countering this? Because we're getting this argument over and over and over again. And I find it very interesting, because if you think about AWS and, you know, other cloud providers, they're the ones who are kind of behind the eight ball on the AI side, and what they're putting out there is not necessarily sufficient to meet the need of the CIO or the CTO. But everybody's going to cost-cut, cost-cut, cost-cut, and, you know, this is your ticket out of your malaise. To which I scratch my head and shake it, just like you are, Rich, and go: where are they getting this connection that I can't see, that this is going to save me money on cloud?
Why? Because I can cut dev FTEs, I can cut ops FTEs, because I can, you know, have less ingress and egress... like, I don't get it. But they're not the only company. Yeah, no problem.
But getting back to your point, Rob, and kind of one of the things that you've been focusing on in this presentation, yeah, which certainly has relevance here.
What's missing, from your point of view? What's missing in the messaging coming out of MSPs, data centers? I mean, does this argue for the possible creation of a new class of service provider? I'm not sure that it's an MSP as we know it. Well, there's a new MSP trying to emerge for strictly AI workloads, right: the CoreWeaves. But those are predominantly single-tenant companies that were spun up and invested in to run infrastructure for a single customer, and then, to make the business work, they think they're an MSP for AI clusters. But our experience with MSPs is that it's typically not a very analogous model. They're highly specialized, which matches; they typically have a long-engagement business where they're living in a relationship; they have high expertise in something, so that matches. But our experience with MSPs is that they typically don't have the investment in operations and operational efficiency, which I think places them at a disadvantage. If you look at the hyperscalers, the hyperscalers are very tuned for operational efficiency. You talk about the CSPs, the cloud providers? Yeah, right. And so these AI providers are in a super rush. The urgency is very high. The cost of the gear is unusually high, so the cost of the gear not performing is high, right? Every line where there should be urgency, there's risk that they're laying on themselves, correct? Well, and it's been so hard to get the chips. So if it takes you a long time to get the chips, then every hour that passes after you get that actual inventory is expensive. I mean, there should be this tremendous drive towards operational performance.
And then what we see a lot of the time is that the thing that causes them problems is not just onboarding; it's actually the reset and update cycles. These AI clusters have high rates of BIOS change, high rates of OS change, high rates of application change. They have high failure rates that cause you to cycle a machine and update it, and that's downtime. That's needed downtime: that's AI downtime, that's downtime for anything else you're running there. And so what I would expect to see is high governance value, and the organizations are not mature enough, or they're not thinking about it as: I have to run a world-class data center, I'm back to this phrasing, that also has these AI capabilities, which almost by definition must have AI support as a high priority with a lot of attention given to it. And tell me, can you unpack for me what you consider included when you talk about governance? Because everybody uses it in many different ways, depending on which direction; you know, I'm worried about compliance with... No, this is... a couple of months ago, right, I was working on those graphs where I was showing that people expect governance to cost them something, when the reality is it accelerates them in the end. So you're right, there's a rule-following component of governance that people see as a liability. What I mean by governance is that you have very high consistency, you have very high repeatability in the system, you know what's going on in your system and can identify it, and then, if things are out of conformance, you have a way to remediate them very quickly.
You know how to do that if you encounter it, correct. So the idea with a governed system is that you're able to drive all of your components back into a known state, a desired state. You have very high known, controlled state attainment. I like the way I'm saying that; that's what governance is, right? Okay. And I think people conflate governance with GRC, which is governance, risk, compliance. Yeah, they're separate issues that have to be addressed. Well, I think that if you're following my model of governance, what I consider governance, then you're addressing the governance, risk, and compliance question at the same time, right? The added piece is that what I'm talking about is a very change-friendly system. And this is, I think, the thing that gets people really confused: when I think about a well-governed system, it's change-friendly, where most people think about governance as the opposite of friendly, as nail it down, nail it all to the floor. Correct. And what you're talking about with change friendliness means that, just as we've had agile approaches to software development and CI/CD and so forth, there is a dynamic or continuous process that effectively applies a form of version control on the... you know, as in GitOps, right? Right, GitOps, desired-state seeking. That actually is GitOps. One of the things people love about GitOps, though they don't say it as governance, is that GitOps is fundamentally a highly governed system, because you're always pulling the system back up to the spec. I would argue that GitOps, or we'll call it DevOps, has as its necessary analog DataOps, which is still about software, not about the data, right? And then you have data governance, if you want to think about that separately.
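The "desired-state seeking" idea, GitOps as a governed system because it keeps pulling infrastructure back to spec, can be sketched as a minimal reconciliation loop. The component names and action strings below are purely illustrative, not any particular tool's API.

```python
def reconcile(desired: dict, actual: dict) -> list:
    """Diff actual state against desired state and emit remediation actions.

    A real controller would apply the actions and then observe again;
    that continuous observe-diff-remediate loop is what makes the
    system "governed" in the sense discussed above.
    """
    actions = []
    for component, spec in desired.items():
        if component not in actual:
            actions.append((component, f"provision to {spec}"))
        elif actual[component] != spec:
            actions.append(
                (component, f"remediate drift {actual[component]} -> {spec}"))
    for component in actual:
        if component not in desired:
            actions.append((component, "decommission: not in desired state"))
    return actions


# Illustrative example: one drifted component, one missing, one unexpected.
desired = {"bios": "v2.1", "os": "ubuntu-22.04", "firmware": "f9"}
actual = {"bios": "v2.0", "os": "ubuntu-22.04", "gpu_driver": "535"}
for component, action in reconcile(desired, actual):
    print(component, "->", action)
```

The governance property is in the loop, not the diff: running this continuously means any drift, however introduced, gets pulled back toward the declared spec.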
And then, you know, the question becomes: where are they likely to get DataOps as well as DevOps and GitOps, and from what sources do they get a form of data governance that is truly about the data? And it goes to your point that if we're talking about "well governed" meaning able to withstand and incorporate necessary changes, based on demand or based on "hey, we overshot this mark, we've got to roll it back"... yeah, this is vital. And it seems to me that the notion of governance may need to be kind of exploded a bit, or it might just not be the right word.
Well, you know, a word that I didn't hear you say, Rob, sorry if I jumped in front of you, is "observable." You're talking about adaptability, and change-friendly is the definition of adaptable. And it's not about wordsmithing; it's about using observability as more of an umbrella over governance, rather than data governance being misused as a turn of phrase to encompass everything, which is how it gets conflated and misinterpreted. Data governance, to Rich's point, is very different from what you're talking about. I look at the concepts of observability, and if I inject that, I'm observing, I am adaptable, and that is change-friendly. And that's, to some people, antithetical to the word governance. It's more about stewardship. Oh, like government,
you know. This is, yeah... in my way of thinking, observability is part of a three-part issue. You have observability; you have evaluation, or kind of assessment based on what you're observing; and then you have the active portion of it, which is governance, which is at the very least making appropriate recommendations, if not automating the process of adjusting the infrastructure across the board. So I guess I'm pointing to the fact that there are going to be flavors of observability, flavors of evaluation on what you're observing, and flavors of governance that you break down. And it is a cycle: early on, you're not going to automate everything. You're not going to let an automaton of any kind automatically reconfigure your IT estate or your AI data center. But over time, maybe it'll creep in there. I guess I'm still trying to find out who's twigged to this notion, and who seems to be building technology, either as a service, open source, what's available now, that actually pays attention to that kind of distinction with regard to governance. Because if you don't do it, yeah, you're going to have more of the same thing that you both have described. Well, and I think this is part of our challenge of, you know, sort of shouting into the wind from a RackN perspective. From a mission perspective, we're very lined up in helping with this governance and process and making that part of the story. And this is where we see the difference between, you know, legacy, current operational standards and leading standards: the way we're describing governance.
And what we've figured out is that we have to include that in the product, because the work and experience to establish that type of control, which is actually what we're talking about with the AI cluster builders, is very rarely incented at any organization we've seen, right? I think the cloud providers, the hyperscalers, do it; some of the other hyperscalers have these levels of controls. But that's not migrating out, that's not drifting out, because unless you're from there... I mean, those are crown jewels. Those are operational best practices; that's your supposed value. And even if you hire somebody from that organization, it's actually a leadership problem: you have to fund them, you have to build it, you have to go do it, and it's incredibly hard to build those tools. So the sort of thing we see is somebody popping their head up and saying, "I know that I could do better governance," but connecting all those pieces together, building the critical mass and all the other pieces, and then enforcing it is very hard to do. Maybe a stupid question, but with respect to RackN: what's the boundary beyond which your offers don't go? In other words, where is the decision being made by someone else regarding what's sitting on the infrastructure, using what you're exposing to them? How would you characterize that line? It really is the bare metal infrastructure; that typically is the line. So we have to interact with and help install platforms, right: VMware, OpenShift, stuff like that.
We're in operating systems, but we're not designing or writing any of that. We'll make sure those work better, we'll get the hardware conditioned, we'll provide information that helps design or make those decisions, but we don't write that stuff. And even if somebody's writing Ansible scripts to do post-configuration, we have a tendency to use those rather than try to own them. We also draw a line around orchestration: we support orchestration, but we usually receive instructions from somebody else's orchestration. It's a weird line. So it's very much, I need conformant bare metal with an operating system installed on it, and I want to be able to maintain that conformance level. We do that.
Do you think of it as with an operating system, or is it a layer of abstraction?
There's always an operating system. The ability to get an operating system functional on that hardware is critical. VMware is an operating system from our perspective. OpenShift is an operating system. Actually, Kubernetes and all the distros, OpenShift included, are also cloud- and virtualization-focused: those platforms expect a specialized operating system to run on, and then they're even more keyed into this network, this interface. They're very locked in, very specific. And that's been our challenge with VMware: it's incredibly hardware sensitive, and they're not particularly good, much worse than Linux and Windows, about this tool and that tool to get the system properly configured. Most of their customers never actually do real configuration of a VMware platform. They buy the standard stuff from the rack, it comes pre-installed to fit that exact system, and they don't touch it.
In that sense, that's kind of what I'm talking about when I talk about a layer of abstraction. They're basically sitting on the other side of VMware. They know how to make use of it: don't change anything in there, we got it set up and it seems to work well enough, this is what we're going to use. And they're using that in lieu of a layer of abstraction.
That is quite literally the data center we walk into: I've put VMware on everything. Right? And this is actually what we'll do next week; this is the VMs-versus-containers story. There's a generational shift going on, and I'll preview the conversation: people assumed they were going to abstract out all the complexity we've been discussing by putting VMware in place and then nailing it down. They would buy VMware and hope it was able to isolate them from the hardware complexity.
Yeah, it's not a layer of abstraction that liberates them; it's a shackle. But it also enables them to sell whatever, right? I mean, one of
the things...
Sorry, it's a captive audience.
It's a captive audience. And it has been great for the industry, because you can certify your product to run in VMware. This is what we're seeing with Red Hat OpenShift. For the last five years, Red Hat OpenShift sold to people who assumed a VMware sublayer, and happily said, I don't need to do anything but install on VMware; it dominates the industry. We'll walk into the enterprise and sell OpenShift on VMware: I don't have to deal with any complexity on BIOS revs, all that stuff is gone, because they didn't need to compete for budget with VMware. And that changed. Without kind of talking through next week's topic before it's time...
Right. If one contemplates a target for the ISV that is container-oriented, as opposed to VM-oriented the way VMware is, then we get back to that whole question: would that potentially liberate the ISV? Thinking, I can go up against the cloud; I can go up against AWS's database or storage offerings by delivering directly to containers the most up-to-date, ready-to-rock-and-roll version. And if I have use of an abstraction that to some degree retains the isolation from the actual bare metal... it strikes me that's either the place where somebody's going to win big or screw up royally and create smoking craters.
As far as I can tell, all new software is being written for a container delivery model, which means most of it is being written for a Kubernetes delivery model. The Venn intersection on that is incredibly high.
If you're doing containers, close to 100% of new software is being delivered in containers, and of that, maybe 90% is Kubernetes.
Right, right.
What I haven't seen yet, but expect to see, is the downstream result, Rich, that you're implying: if that's true, then we should be seeing ISVs and software being packaged for delivery in Kubernetes. Now, we do see this, and this is the weirdness of Kubernetes. If an ISV is showing up with a Kubernetes assumption, they're also showing up with an "I run in my own Kubernetes cluster" assumption. So what you're not getting is "I am part of your other cluster." Now we're starting to see it, and this is one of the things I like about OpenShift: when they deliver software in Kubernetes, they're also delivering an ecosystem that includes what the community terms a CRD, so there's an interface pattern where you can post your service in a way that's consumable by other Kubernetes services.
Interesting.
And to the extent that happens, it allows these platforms to have an ecosystem. There's a corollary I get to when I talk about containerization and virtualization and the shift toward OpenShift being the ecosystem partner for these people. The difference between Kubernetes and OpenShift is that OpenShift provides an ecosystem to install more services into Kubernetes, into an ecosystem they're curating. And that's the missing piece for what you're describing.
Yeah. So I've got another question for you then. Is there any advantage to the CSP, say Amazon with EKS next gen? Are they subject to the same kind of behavior of individual clusters, as opposed to something more general?
They are... no. People do spin up a lot of EKS clusters, but the difference is that Amazon already has an ecosystem for providing services and virtual private clouds and dedicated networking.
So there's less of a need. Wow. They already have an abstraction that covers the rest of the infrastructure.
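The CRD interface pattern mentioned above, where an ISV's service registers an interface other Kubernetes services can consume, boils down to a manifest with a well-known shape. As a sketch, the helper below builds one as a plain dict; the group and kind names (`example-isv.com`, `Database`) are hypothetical, and no cluster is contacted:

```python
def make_crd(group, kind, plural, scope="Namespaced"):
    """Build a minimal CustomResourceDefinition manifest as a plain dict.

    This mirrors the apiextensions.k8s.io/v1 shape that kubectl or a
    Kubernetes client would accept; it's the registration half of the
    CRD interface pattern.
    """
    return {
        "apiVersion": "apiextensions.k8s.io/v1",
        "kind": "CustomResourceDefinition",
        "metadata": {"name": f"{plural}.{group}"},
        "spec": {
            "group": group,
            "scope": scope,
            "names": {"kind": kind, "plural": plural, "singular": kind.lower()},
            "versions": [{
                "name": "v1",
                "served": True,
                "storage": True,
                # A real ISV would publish a full OpenAPI schema here.
                "schema": {"openAPIV3Schema": {"type": "object"}},
            }],
        },
    }

# A hypothetical ISV exposing a managed-database interface to the cluster:
crd = make_crd(group="example-isv.com", kind="Database", plural="databases")
```

Once a CRD like this is applied, other services in the cluster can create and consume `Database` objects without knowing anything about the ISV's internals, which is the ecosystem effect being described.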
What it does give them, though, at least on EKS... sorry, with Google and GKE, and it doesn't have to be OpenShift specifically, is that the OpenShift model gives them a framework for integrating a marketplace into the Kubernetes platform. So, for example, implementing an internal ISV marketplace for OpenShift. This is the reality on VMware, and why VMware is so hard to replace: it has an ecosystem of people who deliver into VMware with certified solutions, and the customers... right, the enterprise needs a comparable ecosystem to emerge, and OpenShift is providing that ecosystem. Kubernetes by itself is not enough. So
if you're running Kubernetes by itself, you're going to need a Kubernetes admin to manage the workloads for you.
Yeah, correct. What OpenShift does well, as long as you're not the Kubernetes admin yourself (in which case it kind of grinds your gears), is serve someone who just says: I want a cluster, I want to purchase the software, and I want it to run on my cluster. Basically, you enable it, the OpenShift operator installs the application operator, and the application operator actually installs and configures the application for you. So it's one click; it does everything for you. Which, again, as an end consumer is great. As an admin it ends up being a nightmare, because there's only one way. And if
you have siloed everything that's coming in in that form, directly into the cluster, you've basically created very difficult buckets that you have to figure out how to coordinate. And as an admin, you have no room to adjust it. I think
if the out-of-the-box method for installing the software works for you, meaning you have no compliance requirements that block it, then it's great. As an end consumer who doesn't have to care about the integration details, it certainly saves you a lot of work. The problem is, once you start hitting those edge conditions, OpenShift becomes a liability instead of an asset. It
becomes a liability, and the problem ends up residing with the administrator, who is the same person getting tagged to manage governance, as we were talking about earlier.
Yes, it's a one-click solution. It's a great one-click solution, but again, it has its niche. Its niche is exactly the way it sells: you install it, it works. And as long as the workload is available in the OpenShift ecosystem, it works great for you. If you don't have to worry about the details, then it is a reasonably good choice.
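The operator pattern being discussed, where enabling an item causes software to be installed and configured for you, is at heart a reconcile loop. Here is a generic sketch in Python, not OpenShift's actual implementation; `install_fn` stands in for whatever really deploys a component (applying manifests, running a sub-operator, and so on):

```python
def reconcile(desired, installed, install_fn):
    """Drive the installed set toward the desired set, as an operator does.

    desired:   {name: config} the user has enabled (one click each)
    installed: {name: config} currently running
    install_fn(name, config): performs the actual install/reconfigure
    """
    for name, config in desired.items():
        if installed.get(name) != config:
            install_fn(name, config)   # install new, or converge drifted config
            installed[name] = config
    for name in list(installed):
        if name not in desired:        # prune anything the user disabled
            del installed[name]
    return installed
```

This also illustrates the admins' complaint from the conversation: the loop only knows one way to converge, so anything outside its model gets overwritten rather than accommodated.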
But it's also limiting in the sense that... I mean, Docker still does this, but
no. You know, there are companies we deal with that write a lot of software, but they also buy a lot of software. So the issue isn't, can I get software packaged in a container; it's, can the people I'm buying the software from support and maintain that software for me without it being SaaS.
Exactly. And so we're back to ISVs. We had this problem all the time: can the customer run the software? Or, more specifically, if I'm an ISV and I write software, how do I get my customer to run it without creating a support burden for me? Because the benefit of SaaS is it's off everybody's plate.
I think, between the three of you, you've collected the advantages and disadvantages. And if all you're doing by going to Kubernetes as the delivery target for ISVs is basically replicating the model that happened with VMware, which was completely self-contained, and if they're making too much money or saving too much effort by doing it that way, then it's going to be a repeat.
The selling point of OpenShift is that you're replacing an unknown OPEX with a fixed-value CAPEX. You pay for it in advance, and then, okay, I don't need to spend money maintaining it, because it just drops in. So what
you're saying there is more around predictability, and the OPEX-versus-CAPEX issue. Okay,
interesting.
Yeah, but it doesn't address the question Rob just asked: can I manage it, can I operate it? It is the CFO's fondest wish to have a very predictable bill at the end of the year, or the end of the month, whatever the period is. Thank you, that's been helpful for me. We'll go a little bit more next week too.
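The predictability point can be made concrete with a toy cost model; the numbers below are invented purely for illustration and don't reflect any real pricing:

```python
import statistics

def variable_opex(monthly_bills):
    """Unknown OPEX: you learn each bill after the fact, and it varies."""
    return sum(monthly_bills), statistics.pstdev(monthly_bills)

def fixed_capex(annual_license):
    """Fixed CAPEX: one number, known in advance, zero month-to-month variance."""
    return annual_license, 0.0

# A hypothetical year of usage-based bills vs. a flat up-front license ($k).
bills = [8, 12, 9, 15, 11, 20, 10, 9, 14, 22, 13, 11]
opex_total, opex_spread = variable_opex(bills)
capex_total, capex_spread = fixed_capex(150)
```

The totals may land in the same ballpark, but the zero-variance column is the "CFO's fondest wish" the speaker is describing: the appeal isn't that CAPEX is cheaper, it's that the number is known in advance.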
Just to be fair, the OpenShift experience can be replicated on Kubernetes. You don't need OpenShift. You could, for example, use GKE Autopilot, in which case the infrastructure gets managed for you, and you can use the GCP Marketplace to install workloads there for you. Again, it's the OpenShift framework; it uses the same kinds of patterns and processes in the back end. It's just not the OpenShift brand, right?
Cool, all right. We're a little over, so I'm going to wrap this up. Thank you, everybody. Cheers.
Wow, it's really fun to be diving back into Cloud 2030 conversations, recognizing we're halfway through the decade. This is really our first deep conversation of the year, and I am certain it will not be the last. Join us as we discuss the forces shaping the next decade in computing, automation, infrastructure, AI, IT, and society as a whole. In many cases, you can come in and be part of our conversations; we would love to hear your perspective, your insights, and your questions.
Thank you for listening to the Cloud 2030 podcast. It is sponsored by RackN, where we are really working to build a community of people who are using and thinking about infrastructure differently, because that's what RackN does: we write software that helps put operators back in control of distributed infrastructure, really thinking about how things should be run, and building software that makes that possible. If this is interesting to you, please try out the software. We would love to get your opinion and hear how you think this could transform infrastructure more broadly, or just keep enjoying the podcast, coming to the discussions, and laying out your thoughts on how you see the future unfolding. It's all part of building a better infrastructure operations community. Thank you.