Hello, I'm Rob Hirschfeld, CEO and co-founder of RackN and your host for the Cloud 2030 podcast. This DevOps Lunch and Learn session is about continuous infrastructure automation: the idea that when we build out applications on infrastructure, we don't treat them as a static deployment, but as something that is constantly evolving, growing, and changing. There's a lot of technology and challenge in building that, and we firmly believe that this is the right path. This is the future; it's what we have to do. But the path to get there is challenging. There are a lot of components that have to be considered, everything from artificial intelligence and machine learning to how to manage, control, and standardize the automation that does all that work. So we really dig in on these topics; I think you'll really enjoy our discussion.
I think that a lot of edge conversations don't really talk enough about how the network works with it. Yeah, exactly. As a telecom person, I am acutely aware of just how stitched together it is.
That's beautiful. Yes. Okay.
So I'll be happy to talk about some of that, you know, and
and I would love to also hear the telecom perspective on the whole mesh thing. Because a lot of these, you get industrial IoT coming from the bottom up, and you've got the telcos coming from the top down, and seeing what they both think about a mesh and things like that, and seeing where they actually coincide and where they clash. Yeah, I'd like to know that kind of stuff. It's really useful. It's cold halls.
Yeah, they are definitely holding cold.
Alright, so I'm going to look at barriers to adoption. And that's it; that is enough to frame out an edge networking conversation.
Oh, that easily? Oh, yeah. You could spend an entire afternoon on that conversation.
Maybe one day we will, since we do a conference, like, once or twice a year. Yeah.
There's one coming up for open infrastructure with a bunch of these topics. Alright. But I'm going to transition this over to the topic of the day, if we can get rolling on continuous infrastructure automation, which is a term near and dear to my heart, although I don't know that everybody knows what it means. So I think part of what we want to do is talk about that. I'm going back to some of what I've seen from Gartner, where they define infrastructure pipelines and continuous infrastructure pipelines. The idea being, we're building automation that is constantly evaluating the infrastructure, right? So it's in a state where, as things move and drift and change, or via GitOps in some cases, where you're making changes like, oh, I want 10 machines instead of five, those changes are going to be automatically incorporated and then propagated through the system. That's sort of my understanding as a starting point. But I'd love to hear what the phrase means to other people.
So I have trouble with the continuous part of it. You know, I understand CI/CD, right? And CI/CD in the cloud world makes a whole lot of sense, because the infrastructure actually doesn't change very often in the cloud world. Underneath it, you're constantly changing the apps and you're constantly adding VMs or containers or whatever, and that's all elastic, but there's actually a fair amount of consistency to the platform that's underneath. Amazon doesn't go around swapping out hardware all the time, and the hypervisor doesn't change very often; they tweak it. So, you know, what does it mean to be able to change the infrastructure all the time, if that's even possible?
Part of my assumption here is that this includes cloud. It's not just a physical layer.
Okay, yeah, you have to, right? Because the physical layer can't change, for obvious reasons.
So, to your question: it's not so much about changing it all the time. And this is where the third item in the list, reconciliation, becomes more important: you have a certain state that you want to maintain, but you know that your infrastructure is not infallible. Your hypervisors die, some of your VMs might need to be recreated, or you use preemptible instances or something like that, and you want to maintain a certain baseline. So that's where the continuous part comes into place: you define your declared state. You say, I want to have this infrastructure as a baseline, and if reality drifts from what I want to have, make it happen so that it matches again. So it's more about mending rot, to a certain degree.
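A minimal sketch of that reconcile step, in Python, with hypothetical node names: diff the declared baseline against observed reality and report what to create and what to destroy. A real controller would read actual state from a provider API and run this continuously.

```python
# Toy reconcile step: the node names and set-of-machines model are
# illustrative assumptions, not any particular tool's API.
def reconcile(desired, actual):
    """Return what must change so reality matches the declaration again."""
    to_create = sorted(desired - actual)   # declared, but missing (e.g. a dead VM)
    to_destroy = sorted(actual - desired)  # present, but no longer declared
    return to_create, to_destroy

desired = {"node-1", "node-2", "node-3"}   # the baseline I declared
actual = {"node-1", "node-2", "node-9"}    # reality after some drift

create, destroy = reconcile(desired, actual)
print(create, destroy)  # ['node-3'] ['node-9']
```

The "continuous" part is just running this diff on a schedule, or on events, and acting on the result.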
Does continuous infrastructure automation include the elasticity function?
That's one of the goals.
Okay, so that's so that makes sense. Right? Yeah.
So one of the big things that, going back to a previous week's conversation, I would think would be important for continuous infrastructure automation is updating and replacing certificates. Yes, not only as needed, but on an emergency basis.
Well, don't you want that proactively? I mean, how many times has some application come crashing down in the middle of the night because somebody forgot to renew one? You know, yes,
so you want the as-needed replacement, like for expiration, but you also want the emergency case, when something gets compromised and everything has to change now. But the regular one is both: checking, and actually it's not just replacing certificates, but generating new certificates. For me, that's a biggish issue with preventing drift, because that would prevent increasing fragility, or, as Klaus would say, bit rot.
And, yeah, that's actually a very good example of things that could be automated but in many cases are not. Because certs usually don't change all that often: you get a new cert every year, unless you're using ACME, in which case you're already drinking the automation Kool-Aid. So when it comes around to automating things, typically what you see is you start automating the pain points. But when it comes to cert management, if it's once a year, you might say to yourself, well, I'll do it manually this time; next time, I'll for sure automate that. And then next time there are some other priorities, so you end up not doing it.
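That as-needed-plus-emergency rotation policy can be sketched in a few lines. The 30-day renewal window is an assumed value (ACME clients commonly renew well before expiry), and `needs_rotation` is a hypothetical helper, not a real library call.

```python
from datetime import datetime, timedelta

# Assumed policy: renew inside a 30-day window before expiry,
# or immediately when a cert is known to be compromised.
RENEW_WINDOW = timedelta(days=30)

def needs_rotation(not_after, now, compromised=False):
    """Rotate inside the renewal window, or immediately on compromise."""
    return compromised or (not_after - now) <= RENEW_WINDOW

now = datetime(2021, 6, 1)
print(needs_rotation(datetime(2021, 6, 15), now))        # True: expires in 14 days
print(needs_rotation(datetime(2022, 6, 1), now))         # False: a year left
print(needs_rotation(datetime(2022, 6, 1), now, True))   # True: emergency rotation
```

The point is that the same check covers the routine case and the emergency case, so the emergency path gets exercised all the time instead of only during an incident.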
Yeah. Well, in some cases, my experience with this is that the toolchains that people are using are not designed to be hooked together on an ongoing basis, right? But let me step back, because of one of the things you were saying. I'm going to stop my share so I can actually easily see who's talking. The thing that I'm seeing here is, I think there's a stance, an automation stance, which is like: manual is bad, automated is okay. But if you're not continually re-provisioning all of your infrastructure, then you actually are at a disadvantage, right? The systems, especially with immutable deployments and rolling upgrades, should be in a constantly rolling state for your infrastructure.
Yep. And that's the next part also, what I consider the chaos engineering part of automation. Because what you're testing there is not only that your infrastructure matches what you declare, but also that what you declare is still compatible with the APIs that you're using. Because that is an actual problem as well. Like Terraform before version one used to be pretty bad at this: you could write something for Terraform, say version 0.12, and that module gets updated to only work with 0.13, and until you actually redeploy it, you don't know about it.
Right?
And then every now
and then it rewrites your infrastructure.
Yeah, not a good idea.
Yeah, well, but to me, even though it's designed to be repeated, apply, apply, apply, it wasn't really designed as a continuous infrastructure automation system in that perspective. Because it's not just about provisioning, it's actually about coordinating. And this is where the orchestration question comes in, to me, on the list. Like, alright, if I'm going to do a continuous infrastructure automation system, just rolling out machines doesn't help you; you actually do want to drain them, conceivably. Maybe I'm overthinking it, brainstorming now.
No, you're not, you're not. So let me give you some examples. First of all, probably the closest we have to continuous automation is security, right? Security, you know, firewalls or whatever, are constantly updating, right? Anti-malware or whatever, it's continuous, and it's seamless: it just happens in the background. And much as we would like it to happen like that on the network side, it doesn't actually happen on the network side. And that actually is a problem, right? Because it introduces security vulnerabilities, zero-day exploits, etc.
I would actually add an amendment to that. I would say reporting in security has been automated; acting on those reports is still very much a work in progress.
Yeah, I was thinking of the anti-malware stuff, which, every time you turn on your computer, is just kind of running in the background. Occasionally it asks you to reboot, but usually it just does its thing. But you're right, firewalls act more like routers in that regard; it's not as automated.
Or you might be running a SIEM, and you scan your network and scan your machines and what's deployed, but software upgrades are still, like, tossing the ball over the fence from security to operations, in many cases. Firewall management, yeah, it's a mix, I find. In some cases there's some machine learning there that automatically blocks traffic; in other cases it just fires off an alert. And I will say, it's not invalid, but there's still a big desire in the community to have human supervision of day-to-day systems, which of course flies in the face of automation, right? It's hard to give up control. I experienced that when I started working with Kubernetes. I was actively hostile to it initially. And then something funny clicked: okay, I can give up this control, and that is fine, because there are other things that I control that are much closer to what I want to be involved with.
To me, there's a degree of transparency, a transparency contract there, like AI, where if you don't know how the algorithm is behaving at all, it's much harder to trust that it's working correctly. Yeah,
clear boundaries on domain-specific automation are much easier to swallow than black-box automation, right?
Well, on that topic of AI, actually, I'm working right now on some AI stuff. And in fact, there's way too much reliance on black-box algorithms without understanding what they're actually doing. And, you know, it's garbage in, garbage out, right? So you start simple, with cases where you know the answer, and if the AI algorithm does not return it, then there's something wrong with the AI algorithm. But
I think, go ahead
I'd say part of the problem with AI is that when we attempt to commercialize it, or turn it into a product, the algorithm is usually a proprietary design. So making it open becomes difficult. Not impossible, I would say, but difficult.
Well, so you probably don't know this, but I've started working on a project with Aniket, called Expose, which is some AI work. And, you know, the big gap right now is a valid data set.
Yes, yes. That's a big issue. Well,
I agree the algorithm could be fine, but if your training data is biased, you end up with bad results.
And that is known. I mean, MIT published some really interesting things, not on networking AI, but it applies to face recognition. Oh, yeah. So it's going to apply in networking, too.
It's a common issue across all fields of AI. That's largely also why there are, at least in academia, so many AI competitions, where they essentially train on a new set every year to retest their algorithms and rank them.
Is AI necessary for the continuous infrastructure automation piece? Actually, I guess I'm hoping not.
Oh, I think it's rather an optimization than something that would be necessary for automation. Yeah.
I would say AI is not necessary. There might be borderline cases where it might be. It's arguable, or debatable, whether the AI even provides much benefit, as we were talking about before. It is much, much easier to adopt an automation system if it's not a black box, and unfortunately AI tends to end up as a black box. Even if it's an open algorithm, it might be too complex for someone to review independently.
The way I've usually approached AI and automation together is: you need to have reliable automation paths through your system, right? You need to know that you run this automation, it produces this result; it reliably rotates the certificates, it can do all that work. That, to me, is not AI work; that is good old ops work. And then the AI comes in to say: analyze the cluster and tell me if I'm under-resourced or over-resourced, or is there a pattern of behavior that I need to see. And then, ideally, you have the plumbing standardized: oh yeah, here is the switch, and I'm just eliminating the human from the switch, with proper safeguards about taking the actions.
I wouldn't even call that AI. I would call it machine learning.
Right? Well, it gets into what's machine learning versus what's AI. But machine learning
is just established enough that it no longer has the moniker. From way back when, anything is AI until it's actually established enough that the specific incarnation gets its own name and gets used with that name. Expert systems were AI until they became expert systems; machine learning was AI until it became machine learning.
I would put it a little differently: machine learning is a building block of AI.
So yeah, it's a precursor or building block.
Right. And we don't have AI today either; it's just that what we're calling AI is yet another building block. It's neural networks at this moment. Well, it depends on the neural net learning.
I think the problem is that we're looking at AI as a specific technology, and that's not what it is. AI, in the cosmic sense, is the application of certain technologies, which don't need to be specific technologies, towards a goal of, again, artificial intelligence. It's not a particular tool. But unfortunately, that is about what popularly is considered AI. I mean, it doesn't even need to be general AI; it can be very domain specific. But AI is more the goal than the method of implementing it. We don't need a DevOps
tool that can pass the Turing test, right. But I do see, with the projections of the amount of devices that are going to be managed, how can we do it without ML, right?
I think that automation doesn't need AI, but AI needs automation.
yes, a good way to say it.
Um, and my experience with AI: I worked on speech recognition years ago, and everybody thought we'd be able to use neural nets and intelligent algorithms, etc. The computer speech recognition problem was solved by what I call brute force. They just recorded hundreds of thousands of hours of people talking and then analyzed it.
But I can right away see the problem with that.
However good the app might be, that's how they solved language. I mean, that's how Google Translate works: they just threw millions of documents at it.
And that's exactly it. MIT Tech Review just published another article about how speech recognition doesn't encompass the Black world, essentially south of the equator, the languages and countries in those areas. And that's because Google didn't have any need to solve it for those folks at the time they solved it.
Yeah, well, speech recognition. I assume it's improved now, but when I was working on it, this is back in the early 90s, it didn't even work for women. Because the speech that was recorded was men's, because it was the engineers.
Yep. Yeah. And so it was educated men.
That's correct. Although the VP at the company, Steve Rothman, was heartbroken by the fact that he had a very, very thick Bronx accent and it did not understand him at all.
Yes. And that's actually, in some ways, a symptom of education: you will find that as folks go up the education tree, their accents, in the US at least, tend to reduce. So accented language, dialect language, goes away the more education there is, and the data set from engineers becomes smaller and smaller. And in some ways it doesn't get the outliers, which aren't really outliers but actually localized high points.
So I have to get my masters to get rid of my Boston accent.
I've been working hard on my Philadelphia accent. I've now been living in the New England area far longer than I lived in Philadelphia.
But how, when you go to Philadelphia, do they say, oh, you still have a Philly accent? No, they say, listen to that New England accent you've got. I know they do.
They think I sound funny.
Going back to speech recognition, I just wanted to bring this up before we drifted much further from it: there's also the matter of speech recognition accuracy in general versus real-time speech recognition. I mean, take a modern example: go to YouTube and turn on closed captions and see what you get. It's passable, maybe 80% accurate, but it definitely falters, especially when you cover domain-specific terms that are not part of the general vocabulary. It makes a very good guess, and when you look at what it actually produces, there's certainly a phonetic similarity; it's just not the right word, however. And that's the big problem with speech recognition and natural language processing in general.
and sip falls down on harmony. Oh, yeah. That's not the way to go. But anyhow, back
to continuous infrastructure automation. Going back from AI, to me, the brilliance of Kubernetes is the reconciliation pattern. It's really, really simple from that perspective: this is my desired state, and I have simple rules to check, monitor, and update from that.
Yeah, and it also gives the user control over the boundary conditions. Like, you can define your health checks, you can define your scaling rules. Okay: if the pods are out of sync with the deployment definition, what are my rolling release procedures? Do I want to do A/B or canary? I have the ability to specify that in a pluggable architecture.
I think you just identified something I hadn't thought of in this whole model, which is some standardization around a health check or a drain instruction, some type of included behavior that I think becomes necessary for us to go down this path faster. To be able to say: alright, every time I touch these systems, yes, do the drain, which I think makes a ton of sense. But you need to be able to say, alright, what if my drain isn't working? You have to have these layers of pre- and post-conditions on top of the operations to make it all work, and then coordinate it with the other work that's going on.
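A sketch of that drain-with-guardrails idea: a pre-condition check before acting and a post-condition check that verifies the drain actually worked. The workload table and the hook functions are hypothetical stand-ins for an orchestrator's real API.

```python
# Toy drain with pre- and post-condition checks; all names are illustrative.
workloads = {"node-1": 3, "node-2": 2}   # node -> number of running workloads

def health_ok(node):
    return node in workloads             # pre-condition: node is known and reachable

def evict_all(node):
    workloads[node] = 0                  # stand-in for moving workloads elsewhere

def is_empty(node):
    return workloads[node] == 0          # post-condition: drain actually finished

def drain(node):
    if not health_ok(node):
        raise RuntimeError(f"pre-check failed for {node}")
    evict_all(node)
    if not is_empty(node):
        raise RuntimeError(f"drain of {node} did not complete")

drain("node-1")
print(workloads)  # {'node-1': 0, 'node-2': 2}
```

Failing loudly on either check is what lets a coordinator halt the larger rollout instead of blindly continuing.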
This is also where much of the debate about Kubernetes comes from. That is, the people who have embraced Kubernetes and have workloads that are Kubernetes-friendly, and the people who have legacy workloads with very specific setups, because that is what they've built up in order to maintain the infrastructure, which is just not compatible with Kubernetes. And that's where the clash comes from. It's not a silver bullet, because you can't just lift and shift onto your container infrastructure from what you have in all cases. But it does give a very clear set of rules saying: if you design your application in this way and deploy it on Kubernetes,
then you will have a good time.
That makes me think of the Chef and Puppet patterns, the continual reassertion. Because they would do, not what I would consider continuous infrastructure automation, because it's really continuous configuration, reasserting configuration states over and over again. Yeah, I guess that's different to me. It doesn't feel the same, and maybe it should. And you were talking about Kubernetes, in some ways, mapping to this pattern: we're going to have to design applications that are continuous-automation friendly.
Yeah, absolutely. If the application design doesn't conform to that, it becomes much more difficult. Anything that's stateful, I mean, you can automate it, but it's an order of magnitude more difficult, at least. And yeah, I would consider the Chef and Puppet configuration automation to be precursors of infrastructure automation. I don't think we have proper infrastructure automation yet. I would like to see something like Chef or Puppet for infrastructure. What we have right now is on-demand reconciliation, when we use Terraform or Pulumi or something like that. The thing we're missing is the continuous aspect: something that checks for drift and has clear rules for amending it, or, if something falls outside the boundary of the rules, at least I learn about it.
Would that be beyond, like, I change a profile, I up or down a resource count, and then it goes and does that? You're saying part of what we'd want would be the Chef- or Puppet-esque: review the environment and enforce conformance?
Yes. Drift management.
Drift management, yeah. Which is the "prevent drifting" item on this list.
Yeah, yeah. Well, drift management is definitely a big problem. And I will tell you, it's a problem in spades in edge environments.
Oh, I believe that
It's not just a matter of difference, right, in the sense that infrastructure you want has gone away. There's also infrastructure you used to want and don't need anymore, and it's still there consuming resources.
I like where this is going. Part of what I've assumed from a drift management perspective is an immutability story, where what you're doing is not fixing things that are drifting, but you're just always rolling the environment. Right? That would be part of drift management: literally coming back through and saying, I'm just going to keep re-provisioning these systems, or resetting them and rejoining the cluster, on a regular basis.
I would say that, particularly with the cloud, it is no longer beneficial to look at infrastructure as immutable. It used to be, when you were running just one or a handful of servers in your own data center, because the time to bring new infrastructure into your environment was months or years, depending on your budget. But now, with cloud, we've reduced the size of what we consider infrastructure.
Disaggregated cost, that's disaggregation, right? That's the whole purpose of this: hardware and software disaggregation. Yeah. And in the data center, I've always treated the unit within the data center, and Rob, I got this from you, as the rack. If the rack disappears, everything still runs, because you just roll in a new rack.
Yeah, it's a matter of scale, right? When you run a small data center, your unit might be the VMs, because your rack is immutable. When you run a data center the size of Amazon's, of course the rack is your most atomic component, because it should be cattle at that point.
Right, that's exactly right. Yeah, the edge, I mean, I treat the edge, for the most part, as: that unit is a box. But in the core, in AWS, it's definitely the rack.
Would you, on an edge site, say that you would then have two or three? Like, do you need resiliency or redundancy to allow this type of process? We typically do, yeah. With our universal CPE, most customers get two
for resiliency,
and then that gives you the capacity to do upgrade and swap.
Right, that's exactly right. And frequently it's associated with separate circuits too, so they might be diverse circuits. So if one of the circuits goes down, you know,
would that then become the upgrade or patch story? Absolutely, yeah. So you're literally just doing a re-roll of a unit. But then you do one at a time, so if something goes wrong, you have a fallback in place,
right. And we typically have what we call t lock, which is basically a cable connecting the two together.
But then you have to have logic that says, the thing didn't work here, don't lock out the environment. We're back to the health check scenario.
Like, yeah, we have to have rollbacks, totally. And we do. Typically, we do a snapshot before we do an upgrade, do the upgrade, and if the upgrade fails, you can roll back. And you do one box at a time.
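That snapshot-upgrade-rollback loop, one box at a time, can be sketched like this. The box names, version numbers, and the simulated failure on the second box are all hypothetical.

```python
# One-box-at-a-time rolling upgrade with snapshot and rollback (toy model).
versions = {"box-1": 1, "box-2": 1}

def snapshot(box):
    return versions[box]                 # stand-in for taking a real snapshot

def upgrade(box):
    versions[box] = 2

def healthy(box):
    return box != "box-2"                # pretend box-2's upgrade went bad

def restore(box, snap):
    versions[box] = snap                 # roll back to the snapshot

def rolling_upgrade(boxes):
    for box in boxes:                    # strictly one box at a time
        snap = snapshot(box)
        upgrade(box)
        if not healthy(box):
            restore(box, snap)
            return False                 # halt instead of breaking the rest
    return True

ok = rolling_upgrade(["box-1", "box-2"])
print(ok, versions)  # False {'box-1': 2, 'box-2': 1}
```

Because the rollout halts on the first failed health check, a bad upgrade only ever takes out the one box it was tried on.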
And then how do you test it? The other thing that I always think of with this is that it also means we're going to have to build for API drift in the systems that we're doing these continuous provisions on, right? You're like, oh, one of my clusters is at version one, you're rolling forward on that, version two is coming right behind it, conceivably, and then version three. You're literally moving; you've got this API drift, this integration drift.
Well, I remember this being a big issue with OpenStack. And I'm sure there are still tons of OpenStack environments out there that are stuck at one version or another, because it's just an enormous lift to do a rolling upgrade.
Yeah, and that harks back to the whole concept of: you'd better build for this from the start.
Yeah. And I'm just going to add that I don't really fault anyone for this automation still missing for infrastructure, because it's a new problem, really. It's only within the past decade that we started to make our infrastructure more mutable, or really more ephemeral. So it's something we'll see evolving over the next decade. But yeah, it's a new problem.
Yeah, definitely. And the people that are working on solving it are in specific places, because of the barrier to entry to being in a situation where it matters. To expand on that: it's not like Linux drivers, where you can buy a Raspberry Pi for 10 bucks and work on the driver. If you're talking about these cascading systems, how do you home-lab that? You can't have 100,000 people that are interested in that problem.
Yeah. Unless you're in a system that has a huge footprint, you're not going to have insight into the issues. Because you can prepare for anything you can think of, but it's all the stuff you can't think of that happens in the very large systems that makes the difference. So it's a small community when it comes down to that. And each of these large cloud providers has grown up mostly in isolation, with their own processes and their own hand-rolled automation and whatnot. So there are no standards yet.
Oh my goodness, yes. To me, part of this thing would be having more reusable automation components, right? We deal with this all the time: everybody automates their physical infrastructure, that's what comes to mind here, but it's pretty much the same servers, pretty much the same process. It's 99.7% the same. A problem in general, and we've had conversations about this, is that it's so hard to patch and test automation itself. This isn't even the code. How do you know that when you patch the automation, what's the old stuff going to do, what's the new stuff going to do? How do you know that you didn't break something that previously worked?
We had a similar topic about this last year: CI for CI. When you update your pipelines, how do you ensure that you don't break things?
Yeah. I mean, remember, we wrote a lot of our own orchestration tools, because when we built our products, the orchestration tools didn't exist. And I will tell you, there were some super painful upgrades involving having to take boxes down, and having people go out to the edge to load it up onto the box. We couldn't even do it remotely.
Been there, done that. Out in zero-degree temperatures, in a truck port, on the truck, downloading software onto the system; spending multiple nights in the cold, frozen, and in the heat and humidity of Georgia in the summer.
my condolences.
We're still at the point, especially for edge, whether it's a cloud or not, where you have to go in there and touch it. Because if anything fails, especially in the network, the only way to get there is personally.
I mean, from those experiences, right, is there a thread that causes stuff like that to break? Networking, clearly; it's so easy to break networking. How do we defend against that? Let me try to be specific: I can easily see thinking that you're fixing the automation that builds the environment or patches the environment, rolling that change out, and then having that updated automation break the systems, because you changed the automation.
Yeah, well, one hopes that we catch that sort of thing during testing, which we have. But it's the upgrade process of the automation that caused it. And the only way to get to the system, remember, this is the orchestrator itself, was to physically touch the box, which is very expensive.
And, you know,
it's a hard problem.
I mean, it's not something that even Google has solved. They've had their share of outages due to automation.
Yeah, but at least they're in a data center class.
I know that Microsoft has sunk shipping containers down into deep water and runs them remotely from down there. And if that fails, their field-replaceable unit is a shipping container.
Yeah, yeah. Yeah, but at least it's one
shipping container. Well, as long as they only have the failure in the one, since they have a cluster down there, but as long as you don't do everything at once, it can be just one.
Yeah, that's the truth. This is the interesting thing that we got to on this, to me: the automation itself is a versioned, living component of these systems, right? So it's not just, hey, I've got something that's rolling over all the time and I'm building automation to keep my system going, which I think we've identified challenges with. The automation that you use to do that is also going to be changing over time.
There are a lot of equivalences between infrastructure management and database schema management. Yeah.
Yeah, because you can't try hard to go backwards.
Yep. And you'll find that one of the things Amdahl was renowned for, back in the day when they were doing what you could in some ways call hardware virtual machines, is that they would always ship machines with at least two domains. But they preferred to ship with four: three running whatever the current one is, and the fourth as a test. Because the fourth one, the test, could always be reset remotely and rebuilt from the other three. Whereas if you just have two, you can't really rebuild from the one, because there are all sorts of things. So you have to have your test infrastructure as part of your field-replaceable unit, so that you can do this automation, and when things fail, fix it and roll forward again.
There's a lot to think about. Yep, this was great. You know, the more I learn about some of these topics, the more I realize we have to build.
And I have to drop too. But yeah, thank you. This has been a great conversation. I will try to come as often as I can.
All right, looking forward to talking to everybody. Yes, thank you. Bye, folks.
Continuous infrastructure automation is near and dear to my heart. Our customers that follow this methodology see tremendous returns, and it protects them from things like malware: being able to reliably build and provision and get systems up and running, what they call repaving their infrastructure. It's a really important concept. I hope that you will come back and join us at The 2030 Cloud for additional conversations; we will keep going back to this. It is an important topic, and something that is near and dear to building resilient infrastructure, which we all care about. Thank you for listening to the Cloud 2030 podcast. It is sponsored by RackN, where we are really working to build a community of people who are using and thinking about infrastructure differently, because that's what RackN does: we write software that helps put operators back in control of distributed infrastructure, really thinking about how things should be run, and building software that makes that possible. If this is interesting to you, please try out the software. We would love to get your opinion and hear how you think this could transform infrastructure more broadly. Or just keep enjoying the podcast, coming to the discussions, and laying out your thoughts on how you see the future unfolding, all part of building a better infrastructure operations community. Thank you.