Okay, so I think the question was, what are the different ways that rules_js handles sharing a common utility package? Yeah. So it's not automated, and there are really two ways that come into play. You can either take a direct dependency, where one ts_project just depends on the other, in which case you're not using pnpm workspaces, which is totally fine. You just don't have the construct where pnpm finds the first-party packages; they're not depended on through node_modules, and that's fine — I've seen people do that at scale. And then if you prefer the npm package format, I would use js_library. At this point I wouldn't link with npm_package because of the overhead: with npm_package the output actually gets copied into the package store, and js_library is much faster. So those are really the two.
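A minimal sketch of the first approach — one ts_project taking a direct label dependency on another, with no pnpm workspaces involved (the paths and names here are made up):

```starlark
# libs/util/BUILD.bazel
load("@aspect_rules_ts//ts:defs.bzl", "ts_project")

ts_project(
    name = "util",
    srcs = glob(["*.ts"]),
    declaration = True,  # emit .d.ts so dependents can type-check against it
    visibility = ["//visibility:public"],
)

# apps/web/BUILD.bazel
load("@aspect_rules_ts//ts:defs.bzl", "ts_project")

ts_project(
    name = "web",
    srcs = glob(["*.ts"]),
    deps = ["//libs/util"],  # direct dependency; no node_modules linking involved
)
```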
it fair to say npm_package is now only useful if you want to publish a package outside of Bazel?
That's correct. Yes — npm_package gets a .publish target added by the macro, which literally calls npm publish. So if you're sharing a package with some other repo, then go that route. But not everybody needs to publish outside their repo.
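Roughly what that route looks like, assuming the .publish convenience target mentioned above (the package name and paths are illustrative):

```starlark
# libs/util/BUILD.bazel
load("@aspect_rules_js//npm:defs.bzl", "npm_package")

npm_package(
    name = "util_pkg",
    srcs = ["package.json", ":util"],  # the built JS/.d.ts plus the package.json
)

# Publishing to a registry is then roughly:
#   bazel run //libs/util:util_pkg.publish
```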
So, a dependency on this js_library — if you have a dependency on that, how does that show up in your project?
Not quite — you don't take a direct dependency on the js_library target here. What ends up happening under the hood is that rules_js parses the lock file. pnpm has already found that package and added its details to the lock file. When rules_js generates the root BUILD file here, there's a generated npm_link_all_packages macro that ends up linking all the third-party packages that are in that lock file, and the first-party ones too. So it's available in node_modules as a symlink in the output tree pointing to where that first-party package lives, and the actual target that you depend on downstream is the linked node_modules entry, just as you'd expect.
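At the use site, that wiring looks roughly like this (a sketch; the root linking target name and the @myorg/utils package are made up):

```starlark
# BUILD.bazel at the repository root: link everything in the pnpm lock file
load("@npm//:defs.bzl", "npm_link_all_packages")

npm_link_all_packages(name = "node_modules")

# apps/web/BUILD.bazel: depend on the linked package, not on the js_library directly
load("@aspect_rules_ts//ts:defs.bzl", "ts_project")

ts_project(
    name = "web",
    srcs = glob(["*.ts"]),
    deps = ["//:node_modules/@myorg/utils"],
)
```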
It's maybe worth pointing out that at the use site here, you can't tell whether this package came from within the same repo or from another repo, which is actually a feature, because it means you can do migrations from many-repo to monorepo more easily.
I added a patch for this, but is there anything to make the migration from, like, regular pnpm to using Bazel easier, so that you're not using that same — basically, the pnpm lock files kind of step on each other a little bit, living side by side.
Yeah, you probably could. So the question is, how do you keep pnpm lock files from stepping on each other. Is this because you have different versions of pnpm?
We're in a migration process, and we need, basically, to not have everything be done through Bazel immediately.
Okay, oh, I see what you're saying. So a subset of your workspaces want to be on Bazel. You can do it — there's a pattern. It's not in here; it's in the rules_js examples. Really quick —
Can I give the first part of the answer? I think — yeah, sure.
To start with, I think you want to just migrate the whole repo to pnpm and have one pnpm lock file that's managed outside of Bazel. And then it just so happens that if you point Bazel at that file, everything should still work. So for your question about having two files stepping on each other, the answer is most likely: just don't have two files.
I think we ran into some issues because of using workspaces.
Sorry, I'm not a JS expert — what's that? There's a workspaces thing in JS?
Oh, right. Well, I mean, anything in your lock file, Bazel will try to consume. So say you have a hundred first-party packages, but they're not all on Bazel yet and they're in the lock file — Bazel won't be very happy with that, because rules_js is going to parse that lock file and try to do something with all of them. That makes sense. But in this example here — the one where things are rerooted — the package.json and the lock file are not at the root of the repository; they're in some subfolder. So you can create another package.json in a subfolder with a subset of the workspaces, and have Bazel use that one. It just means maintaining the root package.json twice, but in theory you can use write_source_files to make a copy of it, or have some other workflow. So you can do it that way, and then slowly bring on your first-party packages by controlling that pnpm workspace file to cover just the ones that are on Bazel.
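A sketch of pointing Bazel at that second, non-root lock file, assuming bzlmod (the bazel/ subfolder and its contents are made up for illustration):

```starlark
# MODULE.bazel
npm = use_extension("@aspect_rules_js//npm:extensions.bzl", "npm")
npm.npm_translate_lock(
    name = "npm",
    # a second package.json / pnpm-workspace.yaml / lock file live under bazel/,
    # listing only the workspace packages that are onboarded to Bazel
    pnpm_lock = "//bazel:pnpm-lock.yaml",
    data = [
        "//bazel:package.json",
        "//bazel:pnpm-workspace.yaml",
    ],
)
use_repo(npm, "npm")
```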
To be clear, the pnpm workspaces feature is fully supported in rules_js? Are there any things that are not supported, or not well supported?
Not that I'm aware of, but let's talk afterwards.
Why would I go with js_library?
So in our environment, why would I do one versus the other?
So the question was, why would you use pnpm workspaces and go that way instead of just direct deps?
I think it was: why would you link against the js_library if you could just link directly to the ts_project above it? Right — because in your slides you're showing this: here's a ts_project, and I'm going to link to the js_library below it. Oh, OK, yeah.
We inserted that package.json into that js_library because it's part of the package. But if that package.json is not required — it's there for the type-checking mechanism, the types it needs to find — you could probably go directly. Any rule that provides the package like that has to be at the root Bazel package of that npm package at this point, because the link is relative to that folder in the directory tree.
Yes. So if I wanted to use the, like, Starlark implementation — does the Aspect CLI... do I add it as a language extension into my existing Gazelle binary, or do I need to port my existing language extensions to something that builds with the Aspect CLI as the integration point?
Okay — Aspect CLI, so, Alex, yeah.
So, yeah, we pre-compile a Gazelle binary that includes the Starlark interpreter that powers that feature. So of course, if you've written Go code that you want compiled into that Gazelle binary, then you have this problem. We can grab open source Go extensions and compile as many of them in there as we want, but if yours is private, it will never be in there. So long term, the right answer is that Gazelle should allow extensions to be provided as binaries. We have a feature request on the Gazelle repo from forever ago, where this feature is called Rosetta, and it would be like a gRPC plugin. Recently, Shaheen on our team did an interesting experiment of just using wasm — WebAssembly — to precompile an extension, and then it's pretty easy in the Go Gazelle host to load that and have it implement the Go language API. And this proposal has been generally accepted upstream already, like the original feature request. You know, the folks who work on Gazelle are here in the building, so I think we just need to find some time to do that, and then that's a nice solution, both for our thing that wants to be precompiled and for yours. Then everybody can precompile their Gazelle extensions, instead of having to compile them from source on a developer's machine, right? That will be really helpful for rules_python, which wants to use cgo to access tree-sitter, which is hard to compile on every developer's machine because they may not have a toolchain. The answer today, right now, is obviously, like, there would just be two binaries. So users could do bazel run gazelle, and then run aspect configure, and it does a different set of things.
Do you support, like, webpack or rollup, or any bundlers?
All of them — we're unopinionated. There are some rule sets where we have something specific: we have a rules_rollup and we have a rules_webpack. But for any tool in the npm ecosystem, we pre-generate js_binary and js_run_binary targets as part of the generated code that you just load; as long as that tool has a bin entry in its package.json, we generate one of those. If that's not enough, the FAQ and the getting-started guide say how to do it. Yes.
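For example, those generated bin entries can be loaded straight from the generated @npm repository — a sketch, assuming rollup is in the pnpm lock file and linked at the repo root:

```starlark
# BUILD.bazel
load("@npm//:rollup/package_json.bzl", rollup = "bin")

# A runnable wrapper around rollup's "bin" entry from its package.json:
#   bazel run //:rollup_bin -- --help
rollup.rollup_binary(name = "rollup_bin")
```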
Is there general advice on how to integrate with other build systems? Like, we have one project, an Electron Forge project, and I mean, I could just wrap it in a single action to build the whole thing, but that sounds kind of awful. Are there approaches that you've taken or seen to, I don't know, maybe break it into smaller pieces or something?
Yeah. So the question is how to integrate with, you know, Next.js or Electron — something that does everything for you. The pattern I've used is to split the transpilation out of those tools. So you have ts_projects in your graph, and instead of feeding the tool the TypeScript files it wants to transpile, you just feed it the output JS files. That tool is then responsible for everything that happens after that point.
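A sketch of that pattern with made-up names: the ts_project emits plain JS, and the downstream tool (rollup here, standing in for whatever bundler the framework uses) only ever sees those outputs:

```starlark
load("@aspect_rules_ts//ts:defs.bzl", "ts_project")
load("@npm//:rollup/package_json.bzl", rollup = "bin")

ts_project(
    name = "lib",
    srcs = glob(["src/**/*.ts"]),
)

# The tool consumes the emitted .js files, not the .ts sources.
rollup.rollup(
    name = "bundle",
    srcs = [":lib", "rollup.config.mjs"],
    outs = ["dist/bundle.js"],
    args = ["--config", "rollup.config.mjs"],
)
```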
Okay. Well, I guess I don't know the tools well enough, but that's a fairly safe assumption to make, that they support that.
I think so, because all the tool is going to do with the TypeScript under the hood is call tsc anyway. That's fair. Okay, yes. Is there any progress on the ESM sandbox escaping? Yes, the ESM sandbox escaping — Alex, you just talked to somebody, I think you can answer this? Maybe, yeah.
So there's someone on the team I was on, a Google veteran who's very close to, like, the Node.js implementation, who says that there is a way to do this that we just hadn't discovered before. I mean, we thought that in practice we would need to land a one-line code change in Node.js that permits monkey-patching at the point where we need to intercept the import statement — the way that ESM imports work — but they would never permit that, so we thought we had to fork it. We haven't tried this flag yet, so maybe we'll know in a few days if it actually does resolve it. I guess, does everybody have an idea what we mean by sandbox escape? Is it worth — maybe we should — do you want to describe the feature real quick?
Yes. So when you're running in the sandbox, which on a Linux machine is the default mode, we monkey-patch — meaning we take the Node code that implements require, and we patch it such that when require tries to resolve and is following the symlinks out of the sandbox, we intercept it and say: no, this is actually a real file that's in the sandbox, not a symlink. So we trick it. You can do that with require; Node doesn't let you do that with ESM imports — it's just completely impossible. So maybe this new flag will let us do that, and we'll try the same approach. The other answer is: use remote execution, and then you don't have to worry about it, because remote execution has sandboxing built right in. That's a great answer, because you can also run a thousand things in parallel.
Now, is that only required because of the symlinks?
Yes, that's correct.
Yes — the question is, is it because of symlinks? And the answer is yes: Node will try to realpath all the symlinks and get to the actual file. So if Bazel had a sandbox that used hard links or some other implementation, or it wasn't symlinks, it would just work. I mean —
Sorry — to be clear, the failure mode with the symlinks exiting the sandbox is usually that everything works fine for developers; it just isn't hermetic. So it works when it shouldn't, right — it fails open, and you can find files you shouldn't be able to find. There are also other tools, of course. esbuild, for example, is written in Go, and it tries to be Node-compatible, which is to say it always follows symlinks, because npm link uses symlinks to begin with — so basically every tool is expecting to do that. Obviously, no matter what we do to patch Node, that Go implementation still goes and does that. I think we finally have a good solution for that for esbuild, after a few different tries — I think something landed in our rules_esbuild. But, you know, new JavaScript tools are coming out all the time. So fundamentally, we're trying to make them more hermetic than they would have been otherwise. It's not just that Bazel can run any tool and it will work; the question is, can we make it even better than that, and give Bazel hermeticity guarantees for a tool that wouldn't have them otherwise?
David, why does the preserve-symlinks flag not solve the same problem?
The question is, can you use --preserve-symlinks? And the answer is kind of humorous, because it would work fine with yarn or npm, but with pnpm there's a symlinked node_modules tree, and so those symlinks have to be followed for pnpm to work. Yes,
you need to follow them to the sandbox boundary and no further.
Great. Yeah,
I'm using yarn, so — yeah, some laughter from other yarn users here. Yes, I have other problems. So maybe that's a discussion topic: is pnpm the clear winner, or should there still be some way for yarn and npm and other package managers? I don't know — is this something people are interested in? I'm definitely interested in understanding why pnpm was chosen. Okay, yeah,
why pnpm? Cool.
Okay, so if there are no direct questions and you want to raise some topics for discussion, we'll just open it up to the room.
I did have one more question,
I think you hinted at this earlier: is it only possible to have one ts_project or js_library per BUILD file in your rules? Talking about, like, linking a directory artifact, right? Okay.
The question is about multiple ts_projects in a single BUILD file? Yeah, that works just fine, like this. In this case each ts_project provides its own name, so you can have as many of these as you want. In fact,
there's typically two, because you want one for the test code, yeah? But
there's a special one that will be named "pkg", which is the default name that rules_js will look for when it links that package.
That's by convention. It's just because you're using a package.json there.
Yes, so we just inserted it here. It might also work to pass the package.json in the data of the ts_project — I think this is a cleaner pattern to read, but yes, that would work for this package too. In fact, there was a feature request from a company that wanted this kind of first-party package to work with js_run_devserver, to have it pick up changes, because that copies into a custom sandbox, and it has to handle that.
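A sketch of the convention being discussed — the names other than "pkg" are made up:

```starlark
load("@aspect_rules_ts//ts:defs.bzl", "ts_project")
load("@aspect_rules_js//js:defs.bzl", "js_library")

ts_project(
    name = "lib",
    srcs = glob(["src/**/*.ts"]),
    declaration = True,
)

# By convention this target is named "pkg" and carries the package.json,
# so the generated linking macro can pick up this workspace package.
js_library(
    name = "pkg",
    srcs = ["package.json"],
    deps = [":lib"],
)
```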
What's your take on the integration of, like, Vite's file system watcher with ibazel? Because developers are used to having Vite watch the file system.
What do you think about this? The question is about Vite file watching versus ibazel file watching. I think the answer, from my perspective, for now, is that a lot of the companies I talk to are just not using Bazel for local development, because the tools for local development are so good outside of Bazel, and ibazel is just adding overhead. So that's my answer for now, until Bazel and ibazel, or whatever we come up with in the future, can replace that experience. The overhead for the platform team to maintain both build systems is real, but developers are much happier not using Bazel locally. So I think we need to find faster solutions, because developers want that, you know, one-second turnaround time. But I think a useful solution is actually to use ibazel to start Vite, so that if you make a change to a first-party package, ibazel will restart Vite, but if you're making changes to the source code of the application, Vite will rebuild the application itself. That way you get a very fast iteration cycle when you're iterating on the application, but you still have the ability to change the libraries.
Yes, more questions. I was
curious about the overheads you mentioned.
I'm sorry, could you repeat that?
The overhead for ibazel. Yeah, so, I mean, ibazel is not bad in terms of the overhead — it's launching the build system. It'll kick off a Bazel build under the hood and rebuild the artifacts into the output tree, and then the dev server setup will pick them up and do the copying into the custom sandbox, which we wrote so that dev servers could actually watch the file system. Because essentially you have two layers here now: ibazel is watching the file system and the dev server is watching the file system, and if the dev server is watching the output tree, it's not going to be happy because of the symlinks, and they're not updated because it's a sandbox. So we actually copy files over into a custom sandbox so that it's real files that it watches. All that stuff adds up, and even if it's only an extra 2, 3, 4 seconds, developers often are not very happy. Yes, more questions.
One of the other issues we've had with using ibazel and webpack is that there are, like, so many file watchers in place on the system, and it's a good bit of work to, like, tell webpack to exclude certain files from watching. And then we're starting to look at, like, patching ibazel, and maybe upstreaming something, so that you can say: no, you don't really need to watch all this stuff — like the external repos, or any of that stuff — because we're not really changing it that much. We only care about, like, sources in a specific directory.
Definitely areas for improvement. Yeah, there are two different systems here. ibazel has a protocol where it speaks over standard I/O to the thing that's being run, and it tells it it's time to look at the file system — but it doesn't tell it which files are new, so it has to watch them all. Which is different from persistent worker mode: when it gets a work request, I believe it does tell it which files are different. So if a patch were made to ibazel, I think there would be a way to, like, propagate that list of files. But then, of course, whatever's running underneath the dev server would need to read that from standard in, and then, instead of watching, just update directly from what it was told. I mean, we've discussed this for years; it's just never happened.
Just quickly to add a discussion topic.
Even with pnpm, just the volume of files that we have to deal with in JS builds — it's tricky. I don't know what the solution is. It was mentioned yesterday, bubbling up the .d.ts files, which I thought was really clever — we can talk about community solutions to that problem. Stripe also said that they're producing squashfs images, I guess, of their node_modules directories.
Sounds good. Are there any other discussion topics besides these two? Yeah,
I guess it's kind of a question about your rules, but also a potential discussion topic: type checking. Do we do it during the build actions or separately, so you can, like, compile without type checking? That's a good one.
Okay, I think let's talk about pnpm — why we chose pnpm. That's actually a great question. So if you remember back to the rules_nodejs days, we had a repository rule for yarn and a repository rule for npm. That repo rule would — well, I won't go into all of it; we'll come back to it in a second.
That repo rule would copy the package.json over to the external repository, run yarn, or run the npm package manager, and let it do its thing. Maybe there's a cache on the computer, maybe not — if it's a cold computer there's no cache, and it's going to the internet to lay out node_modules. Then we had code that would parse that, find all the packages in there, and make the files in there the payloads of BUILD targets. So it was one monolithic action in a repository rule, and pretty slow. If your developers are working in a large repository, you change one package.json file, and suddenly you're waiting four minutes for yarn to do its thing, or npm to do its thing, and
those are also using those tools to do the network access. It doesn't use the Bazel downloader, so you can't use the Bazel downloader cache.
Right — it's using those tools to do the downloading, so we couldn't cache it with the Bazel downloader. So we looked at examining the lock files. Both npm and yarn list all the packages in a lock file, but there were implementation details in both package managers which determine how node_modules gets laid out. Depending on the version of yarn, you might get a different node_modules; I've even seen cases where yarn run incrementally would produce different results. So, long story short, with those package managers you essentially couldn't go from the lock file and say: download this package, this package, and this package separately, and lay them into node_modules — because in order to put them there, there was hoisting, right? Sometimes packages would get hoisted and other ones would get nested, and the exact layout of that was buried in the implementation of those package managers. pnpm basically makes it possible, from Bazel's perspective, to separate that into small actions. So then pnpm came around, and its lock file is beautifully rich: you have the transitive closure of all packages, and it lists their dependencies. Cool — now I know which ones depend on what from the lock file. And then there's the symlinked node_modules tree, which I guess I'll show, because that's pretty interesting, and that's key as well to why we can't use preserve-symlinks, but it also makes linking faster. The node_modules tree, the way it's laid out in pnpm, has something called the virtual store — which, yes, in rules_js we just call the package store. pnpm calls it the virtual store because the files are actually hard links: each file in pnpm lives in a content-addressable store and is hard-linked from there, so they call it a virtual store. Essentially, under that store you have every package in the transitive closure — there might be multiple versions of bar at that level — and then under each of those, a node_modules folder containing the actual bar, and that's where bar's files are. And then next to bar, in that same node_modules, you have symlinks back into the store for all of its direct dependencies. That's how it's constructed. And with rules_js we just redo that in the output tree, which is why we need Bazel 6 for unresolved symlinks. Well — first of all we need symlink support in remote execution, which was added just in time for Bazel 6, which is awesome. And then unresolved symlinks meant that we could declare those symlinks without even having the file they point to as an input to the symlink action: you just make the symlink, it's relative, and you're basically just saying that you have to put the target there when you run your thing. So that's the long answer to the question.
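A rough sketch of the layout being described, with made-up package names and versions:

```
node_modules/
  .pnpm/                                   # the "virtual store" (rules_js calls it the package store)
    bar@1.2.3/
      node_modules/
        bar/                               # the real files, hard-linked from pnpm's content-addressable store
        qux -> ../../qux@4.5.6/node_modules/qux   # symlinks to bar's direct dependencies
    qux@4.5.6/
      node_modules/
        qux/
  bar -> .pnpm/bar@1.2.3/node_modules/bar  # the top level contains only your direct dependencies
```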
If I can add one bit to it — sure. Greg is very modest, but the reason he knows all of these details is that we didn't just use pnpm; he ported pnpm to Starlark. The middle third of it, that is: there's the part where users interact with something on the command line to update the lock file — clearly that's still pnpm. And then, you know, when the program runs, it encounters this node_modules tree, but it's a Starlark implementation that produces exactly the behavior of pnpm.
So here's the output tree for those pnpm workspaces. Within node_modules there's .aspect_rules_js — I didn't want to name it .pnpm — and next to it are your first-party packages, which are symlinks back to the spot in the output tree where that first-party package lives, unless you're using an npm_package, in which case it's a copy of that first-party package. And then here's the transitive closure. These are all individual Bazel actions that create this, so lots of actions for symlinks, yeah. And then we download the tarball from the package registry, and the action extracts right into that spot. It used to be that the download would happen, it would extract in the external repository, and then a copy_directory rule copied it over; suddenly there were amazing bugs around that, and Bazel didn't like source-directory inputs in some cases, so we just switched it to extract right into the virtual store — unless there's a patch, because the patch has to get applied in the external repository, or there's a lifecycle hook, in which case the lifecycle hook action is the one that outputs the result. By themselves these are actions, so they'll run on the execution platform.
Okay, so second topic: the large number of files.
It leads right into that, right? You just said: oh, we unpack the tarballs — oops, now there are lots of files, right?
No, but I was just going to say — I think also with yarn and Node, it was really annoying for people who are not JavaScript developers, in a monorepo with other languages, because that repo rule would run yarn again even when you weren't working on JavaScript at all.
Yes, yeah — the eager evaluation of repository rules under WORKSPACE, because there was no choice: the result of that repository rule was referenced in the WORKSPACE file, so it looks like, I've got to run it just to understand the workspace. Yeah, that was super nice.
Yeah, so we figured out how to solve that. We actually built a repository rule based on yarn. It does have a lot of the downsides, while still solving some of them. We actually ended up making our node_modules folder into a package, only because we needed the load statement to, like, reference the linking macro that we generated. I think you're doing a very similar thing.