Hello, my name is Rob Hirschfeld, CEO and co-founder of RackN, and this is the Cloud 2030 podcast with a continuation of our technical operations series. In this episode we are talking about system processes, specifically systemd: how do you manage and control processes and services, fundamental components of Linux operating systems? In this discussion we actually go very deep into how to think about it and how it works; we talk about alternatives, process controls, even how they get applied to containers. In fact, we started with containers, which is a nice bridge from our previous discussions, where we were talking about container management systems, and then went deep, deep, deep into systemd. If you are interested in Linux and Linux management and Linux automation, this is a good episode for you. Enjoy it.
Did you play with rkt, the container runtime that was supposed to be a systemd-managed container system? I kept expecting it to get more popular than it is, and so I'm a little surprised.
Go ahead. rkt was deprecated at one point, and as far as systemd-integrated container systems go, systemd-nspawn is what replaced it. For the most part there hasn't been much traction, or demand, to run containers from systemd. As far as standalone container runtimes, Docker is still the most popular. Not necessarily the best, but the most popular. And in Kubernetes, you typically use whatever
the subsystem gives you. Well, Docker pretty much split. I know that with the way the open source pieces work, you ended up with containerd as a subsystem for whatever Docker or Podman was. Yeah. And from that perspective, Docker became more of a UX for the container interface; that's the way I usually thought of it.
Sorry, yeah. Okay. With that, if you install Docker, containerd is still bundled.
So to verify: if you're using kubectl, it really is not Docker; it's using containerd behind the scenes, right? Yeah.
I mean, in some cases, it could be another runtime.
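(For the curious: a quick way to check which runtime a cluster is actually using, assuming you have kubectl and a working kubeconfig for whatever cluster is on hand.)

```
# The CONTAINER-RUNTIME column shows, e.g., containerd://1.7.x or cri-o://1.28.x
kubectl get nodes -o wide
```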
This is what's sort of weird, right? If containers had been done as a first-class citizen from the start, then it would just be an OS-level capability, and you'd have different ways to interface to it.
FreeBSD was always capable of running containers, but with jails. There are some projects that try to make containers a first-class citizen, or at least containerized approaches. I don't think it's quite at the kernel level yet, but we might see that in the future.
That's okay.
It would make sense to me, considering the way we're treating it, and BSD and Solaris too, because they had zones. I had early exposure because we were doing zones in Solaris. There was a product by Joyent that was entirely based on zones.
I mean, to be fair, I personally am on the side of microkernels, so I'm happy that it's not been brought into the kernel itself, and I don't think that will ever happen in Linux; it's just too much to put into the kernel. But I can see, perhaps, purpose-built kernels being produced, or developed, for essentially running containers as close to bare metal as you can. But ultimately, at least with modern container systems, you're still relying on cgroups and other features that are part of the Linux base.
All right, we're actually crossing into one of those things that I would love to have explained. I use and say "cgroups" with a degree of confidence, but I'm not sure that I'm actually comfortable with what cgroups are. How would you describe a cgroup?
um,
cgroups are, I would consider them, a process sandboxing feature; it just happens that they're really, really useful for doing that. A cgroup is not just a process but a collection of processes. So it's really a process and its tree. Okay. And that's essentially what a container is. Okay. It's just a process that the kernel starts, except that in the scope of the process, that process is PID 1, and its children are subsequent PIDs. I mean, I see it as an evolution of what was it called, chroot? Well, that's
why it's, that's why it's cgroups. Okay.
I don't know the terminology.
The C stands for control. Okay, control group, got it. And that's exactly what they are: they allow you to specify, you know, you get this much CPU, you get this much memory. And then, as was said, they're all child processes of that PID 1. Gotcha.
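A minimal sketch of what that looks like on the cgroup v2 filesystem, assuming the unified hierarchy is mounted at /sys/fs/cgroup (the default on current distros); the group name "demo" is made up:

```
# Create a control group and cap it at half a CPU and 256 MiB of memory.
# (On some systems you may first need to enable the cpu controller in the
# parent's cgroup.subtree_control.)
sudo mkdir /sys/fs/cgroup/demo
echo "50000 100000" | sudo tee /sys/fs/cgroup/demo/cpu.max     # 50ms of CPU per 100ms period
echo 268435456      | sudo tee /sys/fs/cgroup/demo/memory.max  # 256 MiB

# Move the current shell into the group; every child it spawns inherits
# the limits -- that's the "collection of processes" idea.
echo $$ | sudo tee /sys/fs/cgroup/demo/cgroup.procs
```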
At least when implemented as intended, their processes are also not supposed to be able to see anything outside of their sandbox. So they're essentially pretending to be PID 1, and they're isolated from the other processes.
And that's different than virtualization, because they're still running on the same kernel. Yes, we're back to the kernel: the kernel is the same regardless. Okay.
Yeah. So it means that the kernel takes care of memory management, so you can bin-pack your processes much more tightly. It also means that you don't have the, admittedly minimal now, overhead of doing any of the hardware translations. And again, I admit it's minimal now; with VT-x and VT-d it's almost native. The downside, again, is you share the same kernel. So if you try to run processes that require a different kernel, then you're virtualizing.
Right. Virtualization will give you an even stronger boundary, from a hardware isolation perspective.
Right, yes, for the most part. And there are some new features being worked on in the scope of confidential computing that give you the same kind of strong guarantees in a container's scope. But yes, assuming everything else is equal, VMs give you a simpler-to-prove isolation.
This comes back to the thing: if you're just doing cgroups, if you're looking at cgroups from a container isolation perspective, is that really that different than just starting different processes in the OS? I mean, I guess there's a file system piece under that too, from a container perspective?
From a performance perspective, no; from an isolation perspective, yes. Like I said, the process in a cgroup runs in a sandbox; everything else on the host is invisible to it.
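The invisibility part is easy to see with namespaces, which containers combine with cgroups. A sketch using util-linux's unshare:

```
# Start a shell in a new PID namespace with its own /proc.
sudo unshare --pid --fork --mount-proc bash

# Inside the sandbox:
ps aux     # lists only this shell and ps itself; the host's processes are invisible
echo $$    # prints 1 -- the shell believes it is PID 1
```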
So this is where we were going to go with this: towards systemd, and sort of process control and management. I guess one of the reasons why it's not as simple when you look at running services in the OS: if they were all sandboxed, could they actually still be services, or could they end up being a problem? They
could be, but you would then need to have stronger guarantees on the host that the processes are not able to interfere with each other. For example, if you've ever tried to write new SELinux policies, sorry, not systemd policies, the SELinux policies, okay: that's a real pain in the butt, pardon my French. So being able to do that easily in a container environment, being able to say, if I run this process with these specific settings, then it cannot escape its sandbox, or at least, as long as the kernel implementation is correct, it cannot escape the sandbox: that is a very powerful thing. Because then you can start running multi-tenant workloads on the same kernel.
So I guess,
Multi-tenant workloads would... all right, I was going to pull us in the other direction, back towards systemd, and, if it's all right, come back to multi-tenant. Because, right, Greg, you were saying, ages ago when systemd was coming out, and this is way back in the early, early RackN days, every Linux distro had its own system manager, and you would stop and start system services differently depending on which Linux you were on. And then systemd showed up, and everybody was, and still is, angry about systemd. That's
because Upstart tried to replace SysV init, and then Red Hat came out with systemd, which was basically their answer to Upstart. While Upstart had problems, I feel like it was significantly better than systemd, in that it didn't try to control the whole system. I think the biggest complaint I hear about systemd is that it violates the kind of primary idiom of Linux: you know, do one thing and do it well. Whereas systemd tries to do everything, all at once, everywhere. Except
that's not quite true. Because, I mean, systemd these days is a family of products. There's systemd the init process, there's a systemd component that manages file systems, there's a systemd component that manages user sessions, systemd-nspawn for containers, and so on. So it is still modularized; it just has to be under the systemd umbrella and using the systemd APIs.
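For the curious, running a container straight from systemd looks roughly like this; a hedged sketch assuming you've already unpacked a bootable root filesystem at /var/lib/machines/demo (the machine name "demo" is made up):

```
# Boot the directory tree as a container managed by systemd.
sudo systemd-nspawn -D /var/lib/machines/demo -M demo --boot

# It then shows up in the regular systemd tooling:
machinectl list           # running machines
journalctl -M demo -b     # the container's own journal
```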
But that's kind of my point: it does all of these things. Yeah, you can call it modularized, but it's one big system. And, you know, things like turning simple text logs into a binary format that you can no longer parse with a simple text reader: these all kind of coalesce into this one big blob of a thing that now controls the whole system. And you really can't extract these different components. It's not like you can replace systemd's init with something and then still continue to use the other modular bits of systemd; they're all kind of intertwined and interlocked.
To some degree. I mean, yes, if you replace systemd's init, then the other components, most of the systemd components, are going to be harder to run, agreed. But the other direction is not applicable. For example, if you replace journald with, let's say, rsyslog or syslog-ng or something like that, it's still going to run; you just don't have journalctl and all the other features around it. Furthermore, I would argue that this whole "do one thing and do it well" thing flew the coop a long time ago, with the Linux kernel being monolithic. If you're going to complain about one thing controlling a lot of things, the Linux kernel is not exactly innocent in that regard. But if we look at the history, there's a reason why it ended up being integrated, and it's performance. But
when we talk about "do one thing and do it well," you know, we're talking about userspace systems, like systemd. We're not talking about the kernel; the kernel is the thing that makes everything else run.
Bottom line is,
So let's, yeah, go ahead. No, sorry.
I'll let you go first.
I was just going to say, I mean, I think we're starting from a point of knowledge, and I want to step back for a second. Just, right, the basic idea that the operating system has a kernel, and then there's a whole bunch of services that get run on top of it, and the operating system isn't actually working until the services are started. And then the services have a lifecycle: they have to be maintained, they have to be configured and set up. Hopefully most people using the operating system know that, but I don't think a lot of people really think through this idea of, wait a second, I can't even access a disk or a device without interacting with systemd and doing some configuration. And in that, I think it's very confusing. I know it's confusing to me when I'm bouncing my way through Linux and trying to make something work or not work. And, you know, a lot of times I'm interacting with a service and I don't even realize I'm interacting with a service, because I have a CLI for it. But systemd pulled a lot of the things that used to be handled as all these different one-off pieces into at least a consistent place.
And that's, I mean, everything was in the init directory with SysV init. I mean, well,
Not everything, though. SysV init would start services from the init directory, but you had real problems. For example, if you had a service that needed to wait for a certain volume to be mounted before it was even usable, you had no primitives to do that. You essentially had to do a retry loop to ask: okay, is the system ready now? Similarly, with things like encrypted home directories, you were running into race conditions. These are real things that systemd has solved, and it has solved them by integrating these various services. So from an ecosystem perspective, I see that integration as having been necessary. That doesn't mean that systemd is irreplaceable; there are plenty of distros that have decided, okay, I'm not going to run systemd at all, I'm going to use OpenRC or stick with SysV init, and continue doing things in a decoupled way. It just happens that systemd has provided a practical enough ecosystem that it has been widely adopted. I would draw parallels between this and Kubernetes: the thing is, you don't need Kubernetes to run containers; you don't even need Kubernetes to do container orchestration; there are plenty of alternatives. It just happens to be popular because it gives you all of the tools that you need for most of the use cases, and it's been widely adopted enough that it has plenty of support and documentation.
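That mount-ordering primitive is worth seeing concretely. A sketch of a unit that waits for a volume instead of retry-looping; the service name and path here are hypothetical:

```
# /etc/systemd/system/myapp.service
[Unit]
Description=App that must not start before its data volume is mounted
RequiresMountsFor=/srv/myapp-data
After=local-fs.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```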
Huh, I hadn't even thought about that. I mean, the way I always have seen and heard it is basically that systemd is the scheduler.
It tries to be a scheduler, yes. Okay.
I mean, it was miserable. Linux was already a nightmare, you know, ten years ago, before systemd came out and started to create some standardization. Right, and then it became much easier, because now I know how to restart services consistently on every operating system. I can do a journalctl to see the log, right? I mean, this is, I guess, part of my question: how consistently should I expect to be able to look at journalctl and see the log output for a system, or systemd status, or systemctl status? As long
as you have sufficient permissions, it's pretty much guaranteed, which is a huge deal, because it's built in. I remember the time before, with SysV init, when I had to go hunting through /var/log, trying to figure out: okay, which one of these logs is the one that actually has the information that I need?
And not only checking /var/log. Yeah, things are all over the place, not just /var/log anymore. Exactly.
So the fact that systemd has provided a unified and consistent interface for collecting all of these logs, and the fact that I can easily filter them, with good performance: that was a huge usability improvement for me as a sysadmin.
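A few examples of that filtering, with a hypothetical nginx.service standing in for any unit:

```
journalctl -u nginx.service -f                   # follow one service's log
journalctl -p err -S today                       # only errors since midnight
journalctl -u nginx.service --since "1 hour ago" -o json-pretty
```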
Now, again, it has its faults. It's not perfect, but it's far better than what was there before, right?
Is there something you use from systemd to manage this, like systemd tricks? Sometimes for me it's a surprise: I only use systemctl or journalctl, and I'll follow a log, and it's been pretty nice. But sometimes I don't get much information at all out of that. Like, I can tell it's busted, and I know we have a ton of logs flowing into journald, but I don't know that it's always helpful. And actually I have a follow-up question, which is: is the expectation that that's basically journald? It used to be, what I used to think of as syslog, where you would just subscribe to log output. How is that different?
I mean, it's a mixed question. The usability of what goes into the logs largely depends on what logs the service emits. If the service doesn't tell you why it's being killed, or why it's failing, and so on, then the logs are not going to help you, even if they're centralized. I also wouldn't necessarily consider it the whole answer to logging, the way you would position it. I mean, it does work: as long as the services are uniformly configured to emit their logs to journald, and as long as, you know, the verbosity is correct and the details are in there, yes, it's going to help you. But I would not consider it to be the sole reference for debugging or trying to do root cause analysis. I've been a long-standing, like many years now, supporter of the three pillars of observability, and logs are just one of those: you still need metrics, and you still want traces, or at least some APM or some kind of debugging capability.
systemd, ultimately, at its heart, is really just a process manager, right? I mean, the logging output from systemd is its process awareness. And this is part of what I'm not sure of in systemd configs: is that where you would determine how much logging you want to get? I guess you can talk to a service and increase logging as part of just the service definition. But what you're describing to me is: all right, I have a Prometheus metrics endpoint, which is different than what you're configuring in systemd. Is that fair?
Yes. Okay. Yeah. So I see systemd as a service orchestrator, and what you get through, let's say, systemctl status, or the journald logs, is the information you need to do orchestration tasks. But if you get to a point where your service has an unrecoverable failure or a recurring issue, whether it's performance or availability, I see that as going outside the scope of systemd itself, or even the tools that are built around systemd.
Interesting. I think you're right. Where I was going with this: that's really the design of the service, right? Which is what we just jumped into, not the basics of "I want to start and stop a service, I want to be able to do some log management of it." Should people look at systemd as more than that?
Well,
right, I think in some regards, we beat up systemd for not doing enough in certain areas and doing too much in other areas. And really, all I want from systemd is a Linux-wide unified service management layer that'll handle service restarts, and a generalized logging platform. That's what I expect from it, and to me, that's the fundamental thing that's made it really useful: it has allowed the Linux distributions to standardize the method by which you run a service, especially now that Debian and Canonical kind of hopped on board. Okay, I want to run a service and I need it supervised: instead of having to roll my own daemon system, or whatever, I now have a fairly standardized way that just kind of works, and I, as a provider of a service, don't have to care.
It's also a bonus for developers, because it gives you a consistent interface that works across all distributions. If you write a systemd unit that works on Debian or Ubuntu, it will work on Red Hat as well.
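That "20-line file" looks something like this; everything here is generic systemd syntax, with the daemon name invented for illustration:

```
# /etc/systemd/system/mydaemon.service
[Unit]
Description=Example supervised daemon
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/local/bin/mydaemon
Restart=on-failure
RestartSec=5
User=mydaemon

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now mydaemon` behaves the same on Debian, Ubuntu, or Red Hat.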
And so systemd just becomes a control plane. Now, could it be used to do more? Sure; people try to stuff container monitoring services and control services into it. Okay, I get it. And it's one of those things where the countervailing example is something like Red Hat CoreOS, or whatever it's called now, right? Or a roll-your-own kind of setup where the container is the basis of all interaction with the system. Okay, it's a different path, and you don't even have a service system, because it's controlled by whatever you're using to start your container subsystem, right? It's just different. But in those environments you're like: okay, I've got a different plan, I've got a different story, and so it's not the same kind of thing. Right? From RackN's perspective, we like it, because instead of having to support what is it, five, six different system startup services, we now have to deal with three, right? The one in ESXi, the one in Windows, and the one in Linux. And then whatever the heck people are thinking when they roll their own CoreOS-like system control system, right? And usually the appropriate answer to that is just to get the heck out of the way.
So from an operator's perspective, it's normalized the space. Normalizing is right: you may have lost function, or direction, or path, or component choice, but that's fine. That's an aspect of normalizing a space; you may have lost some function we loved, but okay.
What I mean is, if you look at a service that is well designed for systemd, or is appropriate for systemd, I'm curious: what makes a good systemd service, versus something that's not working, or is a misfit? I'm trying to think through, like, if I'm sitting down, and I mean, this happens: I'm trying to use Linux, and there are some services that seem like they just operate the way I expect them to. And then
Well, part of that is separating the service from the system. Like John originally mentioned, one of the biggest headaches and heartaches of what people perceive as the systemd environment isn't really systemd at all. It's how the underlying OS that is using systemd as its service system chose to implement its networking service, right? So you can say, hey, do I use systemd-networkd, or do I want to use NetworkManager, or whatever Red
Hat does. This was super confusing. It feels like it's always different than what I expected. systemd
is a system to manage services, and restarts, and one-shots, and all those things, in sequence, in order, with depends-on, all those startup challenges. Okay, it did an okay job normalizing that space. But people managing a system is more than just starting or stopping a service: it's configuring whatever it's doing, all these other things, right? And okay, we still have people choosing to roll their own in certain areas. It's like, okay, that's fine. But NetworkManager files versus Ubuntu and its /etc/network/interfaces file, or whatever it is. Yeah. NetworkManager versus, right. Okay. So fine.
And then you've got Netplan. Yeah, I
mean, this was confusing, right? So what is
Yeah, why isn't that just all systemd? Like, configure my network. Well,
it does now, in newer Linux distros.
If you look at newer releases, like Ubuntu 22.04 and later, systemd-networkd is the network manager, and what used to be known as NetworkManager is now just a user interface on top of that. Before, NetworkManager itself would want to manage the interfaces, and systemd didn't do that. I had horrible experiences.
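On those releases, the Netplan layer sits on top. A sketch of a config that hands an interface to systemd-networkd; the interface name is an assumption:

```
# /etc/netplan/01-main.yaml
network:
  version: 2
  renderer: networkd     # render to systemd-networkd, not NetworkManager
  ethernets:
    enp0s3:
      dhcp4: true
```

Applied with `sudo netplan apply`.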
Okay. So part of it was that systemd didn't actually manage the interfaces; it just started a service.
And depending on how the underlying system named that service, you either waited appropriately for the network service or not, right? There are still some heartaches around: all right, did you really do the network service, or the systemd service management stuff, right? Because we still have problems with our services depending on which OS you're running on, because not everybody created normalized networking targets. So most of the time it's covered correctly, and most of the time we've rewritten our service so we don't care.
And this is particularly problematic when something generates environments, like Docker runtimes, because they write their own firewall rules. Yep. So if you're applying them in the wrong order, you can get into big trouble, with systems being exposed that shouldn't really be.
So, all right, I guess half of my problem is that I keep thinking that systemd is a more universal solution than it is, or than it had been. Okay.
And it's an ecosystem, and at its core is a service orchestrator. Yeah.
Meaning that I can say this service has to start after this service, or before this other service; I can create some dependencies in the graph. Yeah.
And this brings us back to the big question about what is an ideal, not necessarily ideal, but a good systemd service. It's one that takes advantage of the various dependency definitions that systemd gives you, to make sure that it starts after all of its dependencies have been brought up, and that it stops at the right time as well.
That makes sense. I mean, that's a big deal, because I know one of the biggest challenges of starting the OS is actually getting all the events to start in the right way; getting events in the wrong sequence causes major, major problems.
And as a side effect of that, you also get things like automatic service restarts, done by systemd itself. I remember the init scripts necessary to restart services, trying to code for the various edge conditions; it was horrible. And now it's like a 20-line file and you're done.
Do you remember having to manage PID files? Yeah. Yeah.
My god. Yeah.
Oh, and we always had to write our own PID-tracker app.
Yes, like an ESXi service.
Oh, because it's an older Linux; it didn't adopt any of these kernel changes.
They have their own kind of SysV vibe, like init scripts with all sorts of other crap added on top, right? Okay. And then Windows, right, they have their own service.
They've had their own service manager forever. Yeah.
It's a completely different system. And that's why, for example, we don't necessarily try to do service integrations, right? That's where you actually want to draw a hard line. Because of the bus, right: you could write a set of additional integrations to hook into the systemd bus, to catch events and stuff like that. Okay, you could. And that could be helpful for certain services, like the device manager and other stuff like that, because it'll pay attention to the device bus and other things. But our service isn't that way. Because we write Go, for the most part, for this set of services, adding dependencies on C libraries for system events keeps it from being portable; for example, our runner agent, which we run as a service under systemd in a lot of environments, we also run on ESXi. Or, well, we'd love to run it on ESX servers, but, you know, the Go compiler that they publish... okay. For all those services, it would be kind of pointless for us to have these integrations into our system, right? And so keeping a reasonable separation, which systemd allows, between your service orchestrator and your service, right, lets you actually have more separated concerns and a portable code base.
So when you say they're separated, that means basically you can say: here's the binary, here's the configuration, start it, and from there keep going. Okay. Our DRP
server and our DRP runner don't actually know or care that they're running under a service. Okay; the fact is, the service system keeps track of the fact that they were started, and their PID, right? In some regards, like Docker and Podman and some of those other container management systems, it's just a system-wide, kind of arbitrary-program perspective.
And then you're interacting, you're making a service call to the service. And
so the other aspect of systemd, right, at least in our common usage of systemd: it becomes a story of keeping systemd at arm's length, letting it do what it needs to do, and keeping those two things separated. Right, we could use a lot more features, but then we become very tied to systemd, and we don't need that. Right? The fact that it has a feature doesn't mean you have to use it.
Would it be a benefit? Maybe not. Well, so, like,
if the DRP client really cared about actively tracking device state, right, instead of just "our service started" or things like that, it could register onto the device bus and the systemd bus and say: hey, give me events for when these things happen. I could catch those, and then I could choose to notify DRP, whatever. Okay. So that's an integration path, potentially quite viable and valuable. But for our usage, in our patterns, we do that other ways, and we like it. Or not. So you
could, with the systemd infrastructure that you have, you could watch it, assuming everybody's playing fair, which it sounds like most services are at this point. You could actually monitor the systemd event stream and say: oh, somebody's attached a device, somebody's removed a device, somebody's...
Or the device subsystem that systemd gets those events from, okay, or the systemd services.
For device attach and remove events, typically you're not looking at D-Bus directly itself.
But it's the same concept: there's a systemd bus.
Yeah. And that would let you... I guess I'm trying to think through what a use case would look like from that perspective.
Well, because Go ahead,
Here's a made-up one, right: if DRP actively reported the current device state to other systems, right, it could watch a bus and say, hey, somebody just attached a USB drive; throw that as an alert, because nobody should be attaching USB drives to the system.
Oh, okay.
Right. Okay. That's a thing. It's just not a thing we choose to put in our service.
Gotcha. But if we did that, and did it from a systemd perspective, it would still be a Linux-only service, because it's a systemd thing. But you would at least now have some consistency across the board. Or, if somebody was stopping or starting services... I guess that's if you're running a compliance check system, right? If you're doing a governance or compliance audit system, you could watch
the systemd bus events and say: hey, did somebody start up a systemd service that I wasn't expecting? Okay, stop it, kill it, notify, alert. Okay. Kind of a watching-the-watcher problem? Yeah.
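A hedged sketch of what such watching could look like with stock tooling; the commands are standard, but the match rules are only illustrative:

```
# Watch everything systemd announces on the system bus
# (units starting and stopping, jobs queued, and so on):
busctl monitor org.freedesktop.systemd1

# Or subscribe only to new-unit signals:
dbus-monitor --system "type='signal',sender='org.freedesktop.systemd1',member='UnitNew'"

# And for the USB-drive example, device events come through udev:
udevadm monitor --udev --subsystem-match=usb
```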
At least, that's something I didn't think about with systemd. I know that you can use it to start and stop services and pull the logs out in a consistent way. I hadn't thought about the eventing side of it: being able to say, oh, I'm actually watching, assuming you have privileges, watching the other services, and from that perspective, what's going on in the system. But that is not... I mean, if you're starting to do that, then in some ways you're not letting systemd do its own orchestration; you're supervising it.
Well, there you're talking about an auditor or monitoring function, which, okay, that's a thing.
It's different.
You could argue the value of it, right?
Well, I guess part of what my goal was for this session, because I think, I know for myself, and I'm trying to proxy for people understanding systems better: it's okay to be able to say, I understand systemd enough to start or stop a service or look at its logs. But we're discussing now an element of, okay, here are things I could do with systemd, here are things that are available in the system, that are broader than just what services are running. From that perspective, I do have a question that I always find confusing, which is: where the heck should I be looking for systemd configuration? Like, if I actually want to see how a service is configured. I mean, I know I can just say systemctl status, but can I actually inspect or mess with the service configuration in systemd? Assuming I can, I just don't know where to look.
/etc/systemd/system, I think,
and then the name of the service. It's just that simple. Yeah.
Sometimes, yes, except for user-provided units, which are under the user directories. And you can also do systemctl edit on a unit, in which case you will be given a blank file where you can add overrides for the base, though
then you're also just seeing the files, which doesn't necessarily mean the service is enabled; there's also a set of symlinking that goes on behind the scenes.
Yeah, but the other thing is, you can do systemctl show and then the unit name, and that will give you, essentially, the rendered configuration that it's using.
And this might be more than what's in the actual file, because it also bakes in defaults from how systemd itself is configured.
So you could use that to actually look at what the composited file inputs are, from that perspective.
Yep, the composited inputs, as well as, again, all of the constraints. You get things like: how many bytes did the service read or write? Are there out-of-memory limits in place? What's the umask that the service runs at? So from a sysadmin perspective, this gives you a lot of usability that you would have had to manage manually before, with SysV init.
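Concretely, with sshd.service standing in for any unit you care about:

```
systemctl cat sshd.service        # the unit file plus any drop-in overrides
systemctl show sshd.service       # every effective property, defaults included
systemctl show -p MemoryMax -p UMask -p Restart sshd.service
```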
So that consistency is important, in part just because it made all the service writers conform to a standard process, instead of back in the day, when you sort of faked it, or you carved out a way and then mapped it into each operating system, which is what I think I remember doing.
Not just that, but it also reduces boilerplate. You don't have to write as much to get a service started. Again, the 20 lines: that's about a standard systemd unit file, and you can do a lot in those lines.
We actually have ours at, like, five or six or seven lines, and then we take advantage of the merging capabilities. So we have our base systemd file, which is just a simple starting point, and then
we inject
sometimes upwards of 20 little specific customization files, handling things like: what's the IP endpoint it should use? What user should it run as? What name should it use to report itself? What's its port? All of those are specific customizations, and it helps with visualizing and seeing what each element is.
Instead of writing one big file, you can just do it atomically, element by element.
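That merge pattern is just systemd drop-in directories. A sketch with hypothetical unit, file, and variable names:

```
# Base unit: /etc/systemd/system/drp.service  (kept minimal)
# Drop-ins:  /etc/systemd/system/drp.service.d/*.conf, merged in lexical order

# /etc/systemd/system/drp.service.d/10-user.conf
[Service]
User=drp

# /etc/systemd/system/drp.service.d/20-port.conf
[Service]
Environment=RS_PORT=8092
```

After adding or editing drop-ins, `systemctl daemon-reload` picks them up.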
Yeah, yeah. And there are capabilities in there that sysadmins are interested in too; like, you can configure a service to be pinned to a particular CPU.
Yeah, that's true, too.
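That pinning is one more drop-in line; the CPU numbers and file name are illustrative:

```
# /etc/systemd/system/mydaemon.service.d/40-cpu.conf
[Service]
CPUAffinity=0 1
```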
Cool, all right.
I'm thinking we only have a minute, and I need to wrap up on time today. This was helpful, thank you. Next week I was hoping to talk about out-of-band management, by the way. There's more in systemd, but this moved me much further down the field, so going into what those configuration files actually look like is possibly its own topic. Thank you, I appreciate everybody's time today. This was helpful for me; I feel like I learned a lot, and hopefully everybody else did too. Thank you, I appreciate it. This will be good. I'm ready to talk next week. Thank you. Thanks, everyone.
Wow,
This is really important in how we are building and talking about systems. From an automation perspective, it is really critical for everybody who's touching infrastructure to understand these core components of how we manage and control Linux systems and Linux processes, to identify where things are doing a good job, and to see opportunities where you could be building compliance, governance, and monitoring systems on top of this type of infrastructure. I hope this was helpful. We're going to keep going deep and wide in talking about infrastructure and automation topics. Please come back, bring your questions into the Cloud 2030 TechOps discussions; you will help us make what is ultimately a course in automation better. Looking forward to seeing you. You can learn more and see our whole schedule at the2030.cloud. I'll see you there. Thank you for listening to the Cloud 2030 podcast. It is sponsored by RackN, where we're really working to build a community of people who are using and thinking about infrastructure differently, because that's what RackN does: we write software that helps put operators back in control of distributed infrastructure, really thinking about how things should be run, and building software that makes that possible. If this is interesting to you, please try out the software; we would love to get your opinion and hear how you think this could transform infrastructure more broadly. Or just keep enjoying the podcast, coming to the discussions, and laying out your thoughts on how you see the future unfolding, all part of building a better infrastructure operations community. Thank you.