20240905 Software Defined Edge

    3:44AM Dec 14, 2024

    Speakers:

    Rob Hirschfeld

    Keywords:

    software-defined devices

    AI at edge

    ML integration

    PLC programming

    edge infrastructure

    AMR management

    IT-OT boundary

    device orchestration

    cybersecurity concerns

    composable systems

    network prioritization

    smart devices

    edge processing

    industrial IoT

    safe shutdown

    Rob, hello. I'm Rob Hirschfeld, CEO and co-founder of RackN and your host for the Cloud 2030 podcast. In this episode, we revisit edge infrastructure and the motivations behind building and managing edge infrastructure, with an unusual take. In this case, we ask ourselves: if all of these edge devices are becoming more software defined, or becoming more standardized, off-the-shelf componentry, will that change how we look at managing and running edge infrastructure? Will we shift compute and operations processes into these ever smarter devices? The answer is going to surprise you. Enjoy the podcast.

    Today the topic is software-defined devices, in which we include ASICs, PLCs, switches, and cameras (I think we're talking about fixed cameras). Does having these things defined in software change how we think about edge technologies? Last week, we specifically added ASICs into this category of software-defined devices.

    And the purpose of my suggesting that was the incorporation of more ML and AI at the edge, based in part on small models, and on models that are utilizing approaches other than transformer models to do interesting things, usually quite focused and quite purposefully trained. That's something we haven't seen much of yet, and the question is, will we see it or not?

    The question I had on that, first off, is how much expertise does that require, and does the expertise needed to do that training change because of AI? We talked about this earlier, and so I wanted to go back and ask: does having AI programming assistance make it easier to leverage these, you know, more embedded technologies?

    I don't think there's enough experience to know yet, quite frankly.

    It's definitely... I mean, normally when I think about ASICs, or any of these programmable systems (I dealt with PLCs a little bit early in my career), these are highly specialized platforms with very limited resources, and you have to understand how to program them and how to operate them. My expectation was that PLCs would actually move a lot of the ladder logic. The way those things are built, and I'm going to extend that to these other devices, is that they have very prescriptive programming styles, because it's so hard for humans to do a good, accurate job on them, and the consequence of a mistake is high. Is that the right way to think about PLCs and ladder logic?

    Well, software PLCs exist now, and what has changed in them is that the ladder logic, which was based on the mechanical aspect of the PLC, has now been given... how do I describe it? You can do more than command. It's not just "open valve one, close valve two" anymore; you can instantiate programmatic logic for a logical flow. So the sequencing you used to do by ladder, you can now do through a CLI. And because the PLC device itself has changed, you have sensors and actuators and all sorts of other things on there that you can grab information from. So I would say it's not just a control system the way it used to be, but more programmatically oriented. The PLC doesn't just define what opens and closes a valve; it can define the rate, the amount of time, the amount of pressure that the valve closes at. All of those extra parameters have been built into the framework. More object-oriented than anything else. And because you can put additional sensors, actuators, and other sensing devices in, you can do a lot more around things like adjusting the frequency of an operation. You can say close for two seconds versus close for five, so you can increase the flow of a liquid through a pump, or something like that. All of that, the parametrics, the programmatics, even the logic of the flow, you can adjust through software. But the question I was going to ask you before, when I was on mute, not to change gears: in your definition of these devices, aside from PLCs, are you incorporating AMRs, AGVs, all of that kind of stuff as well?
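
    A minimal sketch of the shift being described here, from bare open/close commands to parametric, programmatic valve control. The SoftPLC class, tag names, and parameters below are hypothetical illustrations, not any vendor's actual API:

```python
from dataclasses import dataclass
import time

@dataclass
class ValveCommand:
    """A parametric valve operation instead of a bare open/close rung."""
    valve_id: str
    target_position_pct: float   # 0 = fully closed, 100 = fully open
    ramp_seconds: float          # how fast to move, not just whether to move
    max_pressure_kpa: float      # abort threshold read from an attached sensor

class SoftPLC:
    """Hypothetical software PLC; real devices expose vendor-specific fieldbus APIs."""
    def __init__(self):
        # Simulated tag table standing in for real sensor/actuator I/O.
        self.tags = {"valve2.position": 100.0, "valve2.pressure": 120.0}

    def read_sensor(self, tag: str) -> float:
        return self.tags[tag]

    def write_actuator(self, tag: str, value: float) -> None:
        self.tags[tag] = value

    def run_valve_command(self, cmd: ValveCommand, step_s: float = 0.05) -> None:
        start = self.read_sensor(f"{cmd.valve_id}.position")
        steps = max(1, int(cmd.ramp_seconds / step_s))
        for i in range(1, steps + 1):
            # Abort the ramp if line pressure exceeds the configured limit.
            if self.read_sensor(f"{cmd.valve_id}.pressure") > cmd.max_pressure_kpa:
                self.write_actuator(f"{cmd.valve_id}.position", start)
                raise RuntimeError("pressure limit exceeded; valve returned to start")
            position = start + (cmd.target_position_pct - start) * i / steps
            self.write_actuator(f"{cmd.valve_id}.position", position)
            time.sleep(step_s)

# Close valve 2 to 20% over two seconds, guarded by a pressure limit,
# instead of a bare "close valve 2" instruction.
plc = SoftPLC()
plc.run_valve_command(ValveCommand("valve2", 20.0, ramp_seconds=2.0, max_pressure_kpa=300.0))
```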

    AMRs, yeah. What's the name? Autonomous mobile...

    Robots? Oh, Walmart's distribution centers, things climbing up shelves, picking for Amazon: pick-pack loaders, palletizers, things that are used in warehouse management or raw-material delivery to a production floor. Those would all be part and parcel of the same thing.

    I agree. I mean... well, maybe, maybe not.

    Is the question one of whether they belong in the same category as an edge device, or do they need to be something else? I guess I'm asking, Joanne, what exactly is the question about AMRs? Is it that they belong in the same bucket, or is your question, and this is open to all of us, that they belong in some new category, some other kind of thing?

    Well, the question is the impact on the edge, right? That's why I'm asking, because they would have an impact, sure, as would any other sensor, IoT device, smart sensor, or smart IoT device. You were mentioning cameras; that's why I kind of went down that track.

    Well, I think it's interesting to step back, because we're talking about how we think about the edge. I get very caught up on the front half of this question, and you're right to ask the back half of the question, which is, you know, we put this in here for a reason.

    Let me ask a separate question that, by analogy, may help, or may cause more problems.

    You never know. When I think about content delivery networks, there's a pretty clear statement that in a CDN, those servers and the storage that's at the edge are buffering up content, ready to deliver it to somebody's smartphone.

    But the compute logic that's doing a lot to watch the end user of the smartphone, in order to predict what it should prefetch and things like that, is on the smartphone itself. And what logic is placed on the smartphone by the CDN, or in concert with the CDN? Does that mean the smartphone itself constitutes part of the edge, or not? That would be the question. Translate that to the warehouse: do we consider the AMR as truly, entirely incorporated as part of the edge, or is it kind of like a smartphone, where there's an autonomy to it, a distinction in what manages it or the basis on which it makes certain kinds of decisions? That was the reason I asked the question. And yes, it's that fuzzy boundary of what's the edge, and where do we make the distinctions.

    Well, I guess in the case of the AMR, I would consider it somewhat like the smartphone. However, my question is around the fleet management of those AMRs, right? Think of a thousand spiders running around, each one doing something. They are all programmed to do certain functions in certain ways, but the fleet management software could be part of the edge, or it could not, just the way in a content delivery network a particular film may only be streamed in a certain way to a certain group of people based on geography. So it's the same issue, not to complicate your analogy or anything.

    Yeah, again, I brought it up only because it highlights drawing boundaries. So, getting back to your question, Rob.

    Well, for the purposes of this conversation, what artificial boundary do we want to establish first, before we get into the real fuzziness? Or do we go there right away?

    I think the place where we usually end up when we come back to this is IT versus OT, and who is maintaining this. Part of the challenge that I believe got this back on our plate was: are these devices, by being programmable, moving from a delivered, vendor-maintained asset into an IT asset, where the software is actually decoupled from the device, where the software life cycle is decoupled from the device? I think this is the nature of this conversation, because today, most of the devices we just named are vendor-specific appliances for all intents and purposes. And when we talk about edge, we have a tendency to ask what it is going to take to make this an IT edge. Drilling down on that, it's when the software that we're installing on these devices has a separate life cycle, or the operators own the life cycle of that software separately from the device. So you buy a camera, or you buy a PLC, and now the customer is like, wait a second, I don't just have to, I want to own the software life cycle on those devices, because I am adding in a new language model, or I'm integrating it with another device, and if I don't own the software life cycle, then when the vendor patches it, my integrations break. Right? And so that's where I believe we got to with this. And I think where it gets really interesting is: can these become a platform for delivering software onto?

    Yes, they can, and they should be, because a software-defined factory would need that. And there's a fine line here, and I have to caveat this: some of the software for these devices and pieces of equipment is embedded; it's in EEPROM. So it's walking that fine line. Can you decouple the software from the device and have the device still function? That's really the issue, because what's in EEPROM may make it walk forward or roll over on its back, if it's an AMR, but it can't do anything else. It can't go to a shelf, climb the side of the shelf rack, move in, pick up a piece part, back up, climb down, and go about its business to deliver it somewhere else.

    But this, to me, is one of the dilemmas, right? You have a choice: make the device increasingly smart, or offload processing to a nearby system, an edge. The thing that's been fascinating to me is that I've consistently expected we'd have dumber devices and we'd move the processing to nearby systems, and the historical trend has actually been making devices smarter and smarter, which then makes them more of a compute system.

    And is that decision primarily a commercial and economic one? Or is it a requirements decision, in order to hit the SLAs and make it do what it's, quote, supposed to do, by having that kind of full-spectrum control over the end result?

    I have a very cynical answer. Go ahead, John.

    Actually, I would say it's driven by latency and time, because if I split the process into two, I may not be able to accomplish the overall operation or task in the shortest amount of time possible. And to your point, Rob, it's absolutely true: the devices are getting smarter, not dumber.

    My cynical answer here is that I think it's actually driven by laziness and maintainability. The skill sets required to have a very minimal-footprint device and program that device are high, and so the companies that are building the device are willing to pay a little bit more for increased processor capability. Then they can be using a general-purpose Linux instead of a real-time Linux, or Python programming instead of C programming, to do this work. The incremental cost per unit is offset by the ability to maintain the systems. But then what we're doing is slowly creeping towards systems that are actually IT systems, as we get further and further up that chain.

    And I would agree with that, even to the point of using eSIMs or intelligent SIMs in a sensor or actuator, where it can roam and find the first available network. Remember, you used to have to take the SIM card out of the phone and change it if you changed carriers. You don't do that anymore; it's all software driven. And because it's software driven, think of a sensor at a factory, or an actuator, that has an eSIM that can find any available network out of the 500 that are available to it, and it is constantly fed updates, so 500 becomes 525, then 550, and so forth over a period of time. So those devices are getting smarter. Same with cameras, AGVs and AMRs, all of that stuff. You are definitely moving up the stack.
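
    A rough sketch of the roaming behavior described here: a sensor picking the first usable network from a carrier allow-list that keeps growing via over-the-air updates. The function names, carrier IDs, and scan behavior are made up for illustration, not a real modem or eSIM API:

```python
import random

# Allow-list of carrier networks, pushed to the eSIM over the air; it grows over time.
allowed_networks = {"carrier-001", "carrier-002", "carrier-003"}

def receive_network_update(new_networks):
    """Simulate the over-the-air update that turns 500 allowed networks into 525, 550, ..."""
    allowed_networks.update(new_networks)

def scan_visible_networks():
    """Stand-in for a modem scan; returns whatever towers happen to be in range right now."""
    in_range = ["carrier-099", "carrier-004", "carrier-002"]
    random.shuffle(in_range)
    return in_range

def attach_first_available():
    """Roam: attach to the first visible network that is on the allow-list."""
    for net in scan_visible_networks():
        if net in allowed_networks:
            print(f"attaching to {net}")
            return net
    raise RuntimeError("no allowed network in range")

receive_network_update({"carrier-004", "carrier-005"})  # the list keeps growing
attach_first_available()
```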

    But now you're talking about every camera in my facility being a Linux endpoint. At some point you go, wait a second, that's an IT concern, right? And there's pros and cons. Then you could be like, okay, we can update and manage these devices, and we can have them do additional AI processing, because we can add in an AI unit or put a container on it.

    And we also want to do it in memory, for speed.

    And the vendors can then also differentiate what they sell. They can sell you a hardware platform, like what Tesla is doing: we're going to sell you a car, and you have to buy the software if you want full self-driving mode. It's the same car; the sensors are there.

    Yeah, you're buying the car. And then, by analogy, you're buying a box with an operating system and four wheels. What you put on it, what you decide to pay to put on it, and whom you're paying also has an impact on who or what has responsibility for ongoing management of those things being put on top. So we get back to that age-old question: how do you define the boundary between OT and IT? And, in fact, does it come down to which organization has responsibility or accountability? No, I think in part it's a matter of: have I delivered to the customer a platform with some clearly built-in, comes-with-the-package capabilities, after which there are decisions to be made, and by whom, as to what to put on it, how to deploy it, what you are using it for, and the metrics by which you evaluate its successful deployment?

    right

    This is sort of a race towards hardware normalization in manufacturing. There's not a lot of differentiation in cameras, so you get out of the camera-building business, let your OEM just sell cheap hardware to more people, and then you sell a software package on top of it. You specialize in the knowledge in the software, not in building the appliance. I think PLCs are similar. Switches, I think, are a really interesting example, because what we've been watching is frustratingly slow: the generic, open source Linux switch has not had the fast journey I would expect. It's inevitable, but it's been incredibly hard for those vendors to make any headway because of the limitations of the devices.

    Well, it's not just the limitations of the devices, it's the limitations of the environment in terms of connectivity, comms, and power, and what is sometimes called the rugged environment. Factory environments are semi-rugged; they're not fully ruggedized. There are so many things you have to take into consideration, and there aren't a lot of standards around them where you can say, oh, the standard is this, so as long as I make it to this level, I'm good. It's remarkably complex. And I think as we move towards V2X, vehicle-to-everything, in automotive, a lot of that technology spills over into other kinds of manufacturing and other kinds of goods. It's not just the software-defined vehicle, it's the software-defined factory. And so that barrier and boundary between OT and IT is really a big challenge. But to your point about the cameras, there are some things being worked on, at MIT labs and a bunch of other places around the world, where the camera is simply a smart lens and a battery to make it move. Where you physically need a power source to do something, where you physically need a mechanical device to make something move, can you actuate a switch in a way that literally flips it off and on through programmatics? That could be difficult, because it requires changing things like signals, analog to digital first, and then how does that digital get interpreted, and so forth. It's akin to asking why we have two sets of hardware plugs, one for internet and one for electricity, when we should actually only have one.

    We should be able to deliver internet service through our electrical outlets, and conversely the same. And I know places where they're working on both of those.

    They're carrying different frequencies, though.

    Yeah. A certain amount of physics you just can't change.

    Going back to your reference about switches and networks in general: network equipment, for the most part, almost by definition, where the enterprise goes for it, is based on

    basically relying on the vendor to say: this is what it will do, this is how you will do it, and, to the degree we guarantee it, this is how it will react to configurations and instructions given to it. The idea of having a Linux-based, open source switch where somebody else has degrees of freedom, if you want to think of it that way, about what goes on it: as soon as you do that, the operations guys, the network guys, go completely ballistic, saying we have no control over quality, you're holding us accountable but we're not responsible, because somebody else has just loaded something onto this switch. And so it's a different kind of separation-of-

    concerns dilemma, right? In some ways, I think we underestimate the miracle that Microsoft pulled off in being able to have a unified operating system across multiple hardware vendors. It's been incredibly hard to do. Red Hat sort of replicates that, with a lot of investment, but I don't see any other Linux doing it. And back in the day, Unix was a vendor-specific OS for this exact reason, and the switches still are. If my switch is having trouble, somebody's like, well, which version of the OS are you running on this switch, against which version of my hardware? Is it my hardware, or is it a patch in the whatever? And the users are like:

    is it the hardware? Is it the operating system? Or is it that wacky piece of application scripting that somebody threw in there that just completely threw the whole thing off?

    We deal with this all the time on the server side, in the data center. Not happily; it's incredibly complex. The variance in one chipset changes what the support is, and the vendors throw up their hands in a lot of ways, as much as anything else.

    In Rich's example, it could be that there's a little crack in a piece of copper stuck in the ceiling from 25 years ago. But this is where I have a big question for everybody. Way back in the day, there used to be two protocols, one called Tumbleweed and the other called RSVP, and the delivery of data over Tumbleweed was fire twice, then drop the copy that arrives after the first one gets delivered. And it wasn't just a messaging protocol; it wasn't part of MQTT. RSVP, or Tumbleweed rather, was: I can throw data on the pipe both ways, the stream that arrives first wins, and the one that doesn't is automagically dropped, so only one set goes through. And this was designed to be used for things like prioritizing channels in the network, so that you always had available bandwidth, like surge pricing, kind of an Uber model for it, and you always had that capacity available. To me, those were fantastic innovations, because they took the stack of data being used by an enterprise and allowed you to prioritize ERP over email, or production over something else. We haven't seen those kinds of innovations since, but those are the kinds of things that would truly impact our view of edge. That's where the rubber really hits the road. Why can't you do that stuff anymore? They hit a heyday, and then all of a sudden they dropped off the planet; nobody uses them, and they haven't been seen or heard of in a really long time. But now, even with 5G or 6G, we're going to need that. We have to start prioritizing, because in-memory processing and comms are what's going to dictate the future of the edge.
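
    A tiny sketch of the fire-on-two-paths, keep-the-first-arrival idea described above. This is generic Python, not the actual Tumbleweed or RSVP protocols; the two "paths" are simulated with threads and random latency:

```python
import queue
import random
import threading
import time

delivered = queue.Queue()

def send_over_path(path_name: str, msg_id: int, payload: str) -> None:
    """Simulate one network path with its own latency."""
    time.sleep(random.uniform(0.01, 0.2))
    delivered.put((msg_id, path_name, payload))

def fire_twice(msg_id: int, payload: str) -> None:
    """Send the same message down two paths at once."""
    for path in ("path-a", "path-b"):
        threading.Thread(target=send_over_path, args=(path, msg_id, payload)).start()

def receiver(expected: int) -> None:
    """Keep the first copy of each message id; automagically drop the duplicate."""
    seen = set()
    handled = 0
    while handled < expected * 2:          # two copies of each message will arrive
        msg_id, path, payload = delivered.get()
        handled += 1
        if msg_id in seen:
            continue                        # duplicate: drop it
        seen.add(msg_id)
        print(f"msg {msg_id} accepted from {path}: {payload}")

for i in range(3):
    fire_twice(i, f"sensor reading {i}")
receiver(expected=3)
```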

    But if you go back to the means of delivering streaming data to or from the edge, you have queues, you have priorities; it's a matter of making use of them. So what's best practice, or what do you demand of any application sitting at either end of a Kafka message bus? What are the conventions, what are the established standards for prioritization, and how do you declare them? Bumping up a level, but also going way back, this is what made the difference when companies started delivering truly low-latency networking to the financial community: there was prioritization, and it was declared, this is how you use it, you decide how you use it. That actually worked pretty well, and it was done without one company owning the entire industry. So it's doable, but it's kind of strange, right? I get your point.
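
    One way the "convention for prioritization" question gets answered in practice is priority-per-topic plus a declared header. A hedged sketch using the kafka-python client; the topic names, header convention, and broker address are assumptions for illustration, not an established standard:

```python
from kafka import KafkaProducer, KafkaConsumer

BROKER = "localhost:9092"  # assumed broker address

# Producer side: declare the priority explicitly, in the topic name and a header.
producer = KafkaProducer(bootstrap_servers=BROKER)
producer.send("edge.telemetry.high", value=b"line-stop alarm",
              headers=[("priority", b"high")])
producer.send("edge.telemetry.low", value=b"hourly temperature trend",
              headers=[("priority", b"low")])
producer.flush()

# Consumer side: drain the high-priority topic before touching the low-priority one.
high = KafkaConsumer("edge.telemetry.high", bootstrap_servers=BROKER,
                     auto_offset_reset="earliest", consumer_timeout_ms=1000)
low = KafkaConsumer("edge.telemetry.low", bootstrap_servers=BROKER,
                    auto_offset_reset="earliest", consumer_timeout_ms=1000)

for record in high:   # iteration stops after 1s of silence (consumer_timeout_ms)
    print("HIGH:", record.value)
for record in low:
    print("LOW:", record.value)
```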

    It is kind of strange. And to me, the impact on the edge means we either redefine the edge, which I think could be problematic, or there's an intermediary. I remember earlier this year I had a client with a question about this, and I was trying to figure out how you could define, for a federated or distributed architecture, and this also goes to the point of the edge and the devices and everything else: here's one cluster, here's another cluster. If I have something that's a prosumer, meaning it produces data, as opposed to the view we always take, which is the consumer of the data, and I move my prosumer slightly closer to, but not next to, my consumer, it's like a fence, and I'm building closer and closer to each picket in the fence. Whether it's nearer or farther away, can I make a determination based on the distance between the pickets, that type of thing? And what is a prosumer, and how is it prioritized versus a consumer? This, I think, needs to come into the edge discussion, because when you talk about edge, to your point about where you separate the software from the device, that too would be part of that same discussion. How close, how far is it, a near edge or a far edge, and how do you reallocate those kinds of things? I don't have an answer, I can only speculate about where things might go, but I would say we're going to start seeing a lot of picket fences, and I don't mean to keep intruders out of edges.

    I completely missed the picket fences.

    Well, picket fences sound like intrusion alerts, security kinds of parameters, but I'm talking about moving things closer together: a greater population of edge servers and/or intermediary devices between two edges, let's say.

    But this is the thing. It feels to me like this idea of offloading processing to anything beyond orchestration and management is something we keep waiting to happen, and it doesn't. This is the interesting thing about the check-in to me: we keep swinging back. We're not offloading processing to low-latency nearby systems, which is part of what the whole idea of edge or near edge was going to be. We're making the devices smarter and smarter, but we're so terrified of having an open ecosystem with them, or dual-purposing a device, that it falls down. From that perspective, it feels like we haven't found whatever it is. As these devices get more capable and become more software defined, and I think we're all nodding vigorously on that, the software management aspects aren't ready to be split off, or if they are, it's very difficult to separate them into a generic platform.

    Yeah. Could you say the same thing about the progression of technologies and organizational responsibilities in networking? If ever you relied on a group to take full-on responsibility for the orchestration and choreography that goes on in networks and in configuring networks, it's because we're so dependent upon them and on their predictability. That's why you lump it all in one place and keep it there, because as soon as you have a split in responsibility, you've just opened up the possibility of finger pointing. And it's constraining, it's restrictive, it's fear based.

    Actually, on the network switch piece... go ahead. Well, you were gonna...

    I was going to ask: do you think, with your talk about how we redefine the edge, about the security issues that we're going to face as we open up the edge? I don't know that we're really there yet to even deal with that level of sprawl, to manage those security issues. I talked to someone yesterday about AI and cybersecurity, and people are saying we don't even know how to identify an AI cybersecurity attack, much less deal with some of those new threats coming. And so there's also the issue that as we extend the IT boundary, we extend the cybersecurity boundary and responsibility, which is a massive issue. We're already struggling with a lot of our industrial IoT security issues with the PLCs, which are very easy to hack, and then we exponentially increase the attack surface as we expand that, unless there is one vendor that creates a unified platform. Most organizations don't have the maturity to deal with the current boundary, much less a redefined boundary, it seems like.

    But you know what? I don't disagree with you at all, Will. And I think there are massive changes coming to that, because manufacturing ranks number one out of any industry as the target for cyber threats and cyber attacks. But to that point, the words orchestration and choreography lead me to composability. Are we getting to the point where the modularity that we've created, rather than creating a kludge, leads us more towards fully autonomous, composable systems? Here's a piece of red Lego, here's a piece of white Lego, here's a piece of blue; put them together however you want, and they will attach. And it's the attachment of those modules, which I call composability, that I think will take over from, for lack of a better way of putting this, Rich, orchestration.

    All based on a fundamental agreement as to the standards of interoperability and interconnect. Without those...

    But I think you already answered the question I was going to ask you, which is: aren't those things just going to be handled by the orchestration? Instead of breaking the device open, I think the simpler answer, and the better silo answer, is: well, we'll just add an orchestration layer and orchestrate these things together, rather than compose them or decompose them. It's too easy to just add orchestration; it scales well enough, there's enough processing power. We're just going to deal with these things as fleets of more locked-down units, because it guarantees the...

    It guarantees it, yes. And the other thing you're doing when you go that route is basically putting governors and constraints on the piece parts that are being orchestrated. Without that, you can't have the orchestration. So if we were talking about robots in a warehouse, there will be speed governors: you can't go faster than 15 miles per hour in areas like this, or whatever the rules are, and those have to be well understood, well defined, and then agreed to.

    Well, some of those are built in, and this gets back to the device. The AMRs, many of them, have LiDAR or some other capability built in that's EEPROM-based, and if it senses that there's a human anywhere within a radius of x feet, it stops; it has to stop and wait. What you can't control, and this is where people are starting to see a business opportunity, is remotely intervening so your AMR is not going to crash into a shelf, cause product to fall on the ground and shatter, and possibly injure a human being. So insurance companies are actually writing riders on business policies for such an event. And because of that, people are trying to build an early-intervention system where the AMR or the robot or the palletizer, whatever you want, big or small, is insured so that if it actually does cause an incident or injure a human being, it's a special situation. The more of them you have, the higher the likely frequency of a human getting injured, and the higher the price of the policy. So there are a couple of startups that I know of that are trying to observe and do early intervention to prevent a robot from actually having an accident, because the amount it would cost you on your insurance if it does have an accident involving a human being is ten times greater than the cost of that intervention.
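
    A minimal sketch of the kind of governor logic being described in the last two turns: a zone speed limit plus a hard stop when a person is detected within a configured radius. The zone names, distances, and readings are simulated; real AMRs implement this in safety-rated firmware, not application Python:

```python
from dataclasses import dataclass

@dataclass
class ZoneRule:
    name: str
    max_speed_mph: float

@dataclass
class SafetyConfig:
    human_stop_radius_ft: float   # inside this radius the AMR must stop and wait
    zones: dict                   # zone name -> ZoneRule

def governed_speed(requested_mph: float, zone: str,
                   nearest_human_ft: float, cfg: SafetyConfig) -> float:
    """Return the speed the drive system is actually allowed to use."""
    if nearest_human_ft <= cfg.human_stop_radius_ft:
        return 0.0                            # hard stop, wait for clearance
    limit = cfg.zones[zone].max_speed_mph
    return min(requested_mph, limit)          # clamp to the zone's governor

cfg = SafetyConfig(
    human_stop_radius_ft=10.0,
    zones={"aisle": ZoneRule("aisle", 5.0), "open-floor": ZoneRule("open-floor", 15.0)},
)

print(governed_speed(12.0, "aisle", nearest_human_ft=40.0, cfg=cfg))       # 5.0
print(governed_speed(12.0, "open-floor", nearest_human_ft=40.0, cfg=cfg))  # 12.0
print(governed_speed(12.0, "open-floor", nearest_human_ft=6.0, cfg=cfg))   # 0.0
```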

    So what does this do, extend Asimov's three laws of robotics to a laundry list like this? I mean, what does it do?

    No, actually, what it does is send a Bluetooth signal to interrupt the programmatic movement of the AMR and stop it. It's like cutting the power to it, or taking the battery out of a handheld device.

    So it's a prioritized, safe shutdown, or a safe quiescence?

    Yes, a safe quiescence, but you would probably have to re-image the bot and its programming, so you'd have to have a backup for it, which means you'd have metrics around measuring it, and to your point, Will, all of the security that's configured for the fleet would also have to be...
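
    A rough sketch of the stop, quiesce, re-image, verify sequence the speakers are describing. Every function name here (send_stop_signal, restore_image, and so on) is a hypothetical placeholder for fleet-management and device APIs, not a real product:

```python
import time

def send_stop_signal(bot_id: str) -> None:
    print(f"{bot_id}: out-of-band stop signal sent (e.g. Bluetooth interrupt)")

def wait_for_quiescence(bot_id: str, timeout_s: float = 5.0) -> bool:
    print(f"{bot_id}: waiting for motion to cease...")
    time.sleep(0.1)            # stand-in for polling the drive controller
    return True

def restore_image(bot_id: str, backup_image: str) -> None:
    print(f"{bot_id}: re-imaging from {backup_image}")

def run_health_checks(bot_id: str) -> bool:
    print(f"{bot_id}: running post-restore health checks")
    return True

def safe_quiesce_and_reimage(bot_id: str, backup_image: str) -> None:
    send_stop_signal(bot_id)
    if not wait_for_quiescence(bot_id):
        raise RuntimeError(f"{bot_id} did not stop; escalate to physical intervention")
    restore_image(bot_id, backup_image)
    if not run_health_checks(bot_id):
        raise RuntimeError(f"{bot_id} failed health checks; keep it out of the fleet")
    print(f"{bot_id}: returned to service")

safe_quiesce_and_reimage("amr-0042", "amr-golden-image.img")
```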

    Yeah. Who would you trust at this point to put that whole thing together, and then who would you trust to actually operate it? Who would want that job? For a lot of folks, that would be like living with a gun to your head. It's like an air traffic controller's job that's ten times worse than what it is today. That's a tough one.

    First of all, I would trust Rob. And you... well, I don't know you well, so I can't say that I trust you. But to your point about cyber threats, points taken, brownie points made. Certainly there are companies with massive fleets of these things running around, which could be very small or very large, and it really is a concern. This is going to be 2025, 2026: you're going to see people and startups all over the place doing this. And yes, it is exactly like an air traffic controller shop.

    No, I think what this is reinforcing to me, and this isn't surprising, but it is a good reinforcement, is that orchestration and management of increasingly smart devices is the place where this is going to happen. We're not going to be making the devices IT devices; we're going to be adding in IT oversight, management, and orchestration, and then the robustness and quality of that, and its ability to do more and more, is where the action is. The challenge becomes not that the devices become IT devices, but that IT is able to interact with them: as these devices become more like IT devices, we are going to manage and orchestrate them better.

    You're layering; you're putting in layers of abstraction, and that's the place where the advances are and where the capabilities are going to be expanded. It then gets to a point of: all right, what does the industry do to either standardize it, or who ends up, by dint of being first in or buying the market, controlling it, controlling the whole notion of how these things are orchestrated?

    I take it all the way back to the OSI model and say, do we need another layer, an

    intermediate layer? Yeah, and that was just so damned effective.

    But it makes you think, because I'm beginning to realize that IT and OT enterprises have a technology estate, and you have different kinds of managers, orchestrators, and choreographers, primarily choreographers, because to me, orchestrate is pejorative, prescriptive. You can't always go that way. You need to have some level of creativity around situations just to accomplish a goal. So I come down more on the choreograph-and-compose side of the fence than orchestrate-and-manage.

    That's like choreographing a... I need to wrap up because I'm on a schedule. That's fine, but I've got to go get my COVID shot. Don't want to be...

    late for that. Oh no.

    All right, everybody, next week. Talking about Brian, yes.

    Wow, what a great conversation. I really feel like we, in a way, broke through some of our conversations about the edge, and really talked about what will be powering transformation at the edge by shedding some of our aspirational assumptions: that it will become IT, that it won't be OT, that it won't be siloed. It's exactly the type of thing I count on the Cloud 2030 group to do. That's why I love these conversations, and I hope you love them too; you clearly do if you are listening to me now. Please consider joining us at the2030.cloud: read the books, be part of our book club, join in the conversations, ask good questions. We really want to hear from you. Thanks. Thank you for listening to the Cloud 2030 podcast. It is sponsored by RackN, where we are really working to build a community of people who are using and thinking about infrastructure differently, because that's what RackN does: we write software that helps put operators back in control of distributed infrastructure, really thinking about how things should be run, and building software that makes that possible. If this is interesting to you, please try out the software. We would love to get your opinion and hear how you think this could transform infrastructure more broadly. Or just keep enjoying the podcast, coming to the discussions, and laying out your thoughts on how you see the future unfolding. It's all part of building a better infrastructure operations community. Thank you.