I'm Tessa, and just a reminder: if we record this meeting, it's easier for taking minutes.
Yeah, we typically record it and I just started.
Okay, great. Thank you. Okay,
Thank you. And then also, Fernando, any announcements for this meeting, please? No, I
do not have. Thank you.
Okay, thanks, Fernando. So for this meeting, I think we'll make it a working meeting where we try to discuss the timing protocol for ePIC's streaming readout. This has been circulating for the last half year, and I hope we can make progress in the near term so that we can condense these ideas into a consensus and fold them into the reference document for the data acquisition system, which will be useful both for the preliminary design review and for the coming TDR. Special thanks to William, who put a lot of work into designing and iterating on the specifications for the timing protocol that we'll discuss today, as well as carrying out the actual tests on the proposed hardware, so William will give the presentation today. Meanwhile, earlier this week we also had a streaming readout working group meeting that discussed, from the offline point of view, how offline processing would handle timeframe data. I think that is closely related to our working group, since we are the data source, and it also relates to William's talk. So let me do a one-slide summary of Tuesday's meeting before I hand the microphone to William. Let me quickly share my screen. So, coming from Tuesday's meeting, here is a link to the agenda, and the meeting minutes are also posted, including everybody's contributions as well as the consensus; the consensus will be reflected, at least in this meeting, in what the timeframe data would look like, how we process it, and what comes back to us. So, from the beginning: the beams will be crossing at 98.5 MHz, and the front end streams out data without a trigger; effectively we are taking a clock trigger at 98.5 MHz and recording every crossing, so logically that is how the streaming readout records the data. We then do zero suppression so the data can fit into the data links; only about 5% of crossings actually contain collision data, and a collision that interacts with the detector produces tracks and showers, while most crossings contain only noise and background, which also produce detector hits. The detector and the readout farm will package the hits, or detector-level features such as interactions, into time slices for some subsystems; for example, the SVT will have 2 to 10 microsecond time slices built around its strobe window. The data aggregation will then further package them into timeframes; I think the consensus is a rate of about 2 kHz, and those timeframes are the data that will be passed along to the offline for analysis.
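To put the rates just quoted in one place, here is a minimal back-of-envelope sketch; the 98.5 MHz crossing rate, the roughly 2 kHz timeframe rate, and the ~12 microsecond revolution period are taken from the discussion above, and everything else is just illustrative arithmetic.

```python
# Back-of-envelope numbers for the streaming readout, using the figures quoted
# in the discussion (98.5 MHz crossings, ~2 kHz timeframes, ~12 us beam revolution).
# Purely illustrative.

crossing_rate_hz = 98.5e6       # EIC beam crossing clock
timeframe_rate_hz = 2.0e3       # consensus timeframe rate discussed on Tuesday
revolution_period_s = 12e-6     # approximate time for one full beam rotation

crossings_per_timeframe = crossing_rate_hz / timeframe_rate_hz
revolutions_per_timeframe = (1.0 / timeframe_rate_hz) / revolution_period_s
crossings_per_revolution = crossing_rate_hz * revolution_period_s

print(f"crossings per timeframe  : {crossings_per_timeframe:,.0f}")   # ~49,000
print(f"revolutions per timeframe: {revolutions_per_timeframe:.1f}")  # ~40
print(f"crossings per revolution : {crossings_per_revolution:,.0f}")  # ~1,200
```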
One discussion from the Tuesday meeting was that, from the offline point of view, there is a preference to not align the timeframe length — the length of a packet of data in the timeframe — with the EIC beam rotation. That is because, for example, when we run pattern recognition on a timeframe, there could be a bias depending on at which crossing we start the pattern recognition and at which crossing we end. The bias is very small, but the preference is to not align that potential bias with the EIC spin states, which are tied to the beam rotation; it takes about 12 microseconds for the beam to make one full turn. A small loss of efficiency at that level is not a problem, but we really do not want to introduce a bias that is correlated with the spin pattern, because we want to control the systematics of the spin asymmetry measurements to below 10^-4, and we definitely want to avoid any correlation that could induce a systematic at that level. So that's the offline point of view; I'm sure it will also be discussed today from our side, which is the readout point of view. Then the next part is event keys. Since we are basically recording every crossing, whether there is a collision or not, the primary event key preferred in the Tuesday discussion is a 64-bit beam collision clock counter, basically counting the 98.5 MHz clock, and I believe we'll hear later how it would be distributed throughout the readout system. This will be our global event key that lets you differentiate every crossing of the beams. The secondary key will be a tuple of run number, timeframe counter, and the beam crossing within the timeframe, which gives a way to associate an event with a particular timeframe, mostly for human convenience. Also for human use, the reconstruction will generate an event counter, for example my hundredth collision event from a particular selection, which is another form of event tagging. So let me pause here for discussion — please feel free to raise your hand and jump in at any time.
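As a rough illustration of the primary and secondary event keys described above — a 64-bit beam collision clock counter as the global key, plus a (run, timeframe, crossing-within-timeframe) tuple for human convenience — here is a minimal sketch; the field names and widths are illustrative, not a specification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EventKey:
    """Sketch of the event keys discussed above (names and widths illustrative)."""
    bco: int           # primary key: 64-bit beam collision clock counter (98.5 MHz ticks)
    run: int           # secondary key, part 1: run number
    timeframe: int     # secondary key, part 2: timeframe counter within the run
    crossing: int      # secondary key, part 3: crossing within the timeframe (fits in 16 bits)

    def __post_init__(self):
        assert 0 <= self.bco < 2**64
        assert 0 <= self.crossing < 2**16

# Example: crossing 1234 of timeframe 56 in run 7, with some global BCO value.
key = EventKey(bco=123_456_789_012, run=7, timeframe=56, crossing=1234)
print(key)
```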
Just a quick question: what determines the zero of your 64-bit beam clock counter?
That's a very good question, and it was not discussed in the Tuesday meeting. From my past experience with sPHENIX, we just specify an arbitrary time before the first collision, set the counter to zero in the GTU at that point, and the GTU keeps that number steadily increasing. Everyone in the system has an identical copy of the beam crossing counter. So it is a primary key: a number that only increases in time and never duplicates.
I mean, at some point somebody zeroes this out. I'm trying to understand this from the point of view of the readout board. So is there going to be something like a "zero the BCO counter" command? I think there will be—
I think, yeah, I don't think we've discussed whether we zero this out, or whether we set it to something, say based on the real clock time,
Though, if you say it's a primary key, then it's an absolute value. Yeah, and who zeros out the absolute value, right? Isn't it the—
The GTU zeros it. So there will be a synchronization at the start of the run. But—
Then it means that the primary key is the run number plus the BCO within the run.
I believe in, in,
For example, in our current case at sPHENIX, it is never zeroed; it only increases, and it gets broadcast throughout the system, so everybody records the same copy.
So how does it get started?
The GTU will send a value at the beginning of any run. Okay, okay, fine, good. That value may not be zero; it may be derived from the wall clock. Ah, all right. Gotcha.
I agree with you. So, if we transfer the 64 bits, does everyone store it with the data? Do the full 64 bits have to be transferred with the data? I don't think we want to transfer all of that, at least not down to the DAM or the RDOs.
No, this is the primary key of an event; this is how bunch crossings are labeled for the offline. But within the system, we've been talking, and I think there's a lot of consensus that there will be a 16-bit crossing count within the timeframe. So in terms of the actual operation of the DAQ system, that 16 bits is what has to go down to the RDO.
This is understood. I think what I didn't understand is the 64-bit, and I still don't, but okay.
Yeah, that hasn't been fully specified. All right. Yeah, I
think that's what I mean, what is
specified. What we do know is that the GTU has to be responsible for synchronizing this in the system, right? And either we'll do something like start from zero every time period, which I don't like, or we'll start from something that's close to the wall clock time, and that's what I favor. But that hasn't been decided or discussed, really, so—
Fine. So among the primitives that the GTU sends out is, number one, start of timeframe, right, which runs at this two kilohertz, and number two, at some point, a 64-bit number which says, hello, this is the clock time.
Right, yeah. And we'll distribute that. Remember, the distribution goes through the DAM boards, so the DAM boards will know about it and will have to keep track of it; the RDO will have the capability to know it but won't necessarily have to track it. Okay.
Okay. But just to be sure, so understood: as long as you are not specifying or defining that it starts at zero upon run start, that's good enough — it doesn't have to be zero, it will just be some value. Exactly. Okay.
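A minimal sketch of the two GTU primitives just agreed on in the discussion — the periodic start-of-timeframe at roughly 2 kHz and an occasional broadcast of a 64-bit counter value that need not be zero (one option being wall-clock derived). The message layout and names are purely hypothetical.

```python
import time
from dataclasses import dataclass

@dataclass
class StartOfTimeframe:
    timeframe_id: int          # increments at roughly 2 kHz

@dataclass
class ClockValueBroadcast:
    bco_value: int             # 64-bit beam crossing counter value, not necessarily zero

def initial_clock_value_from_wallclock(crossing_rate_hz: float = 98.5e6) -> int:
    """One option discussed: derive the starting counter value from the wall clock."""
    return int(time.time() * crossing_rate_hz) & (2**64 - 1)

# Example: the GTU announces a non-zero starting value at run start, then timeframes follow.
msgs = [ClockValueBroadcast(initial_clock_value_from_wallclock()),
        StartOfTimeframe(timeframe_id=0),
        StartOfTimeframe(timeframe_id=1)]
for m in msgs:
    print(m)
```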
May I ask a question also? Yes, please. So, if I understood correctly, the GTU will send the command at two kilohertz, but will it also be sending a command at every revolution, at the end or the beginning of the revolution?
I believe that's true, and I think that's something we do want to discuss in William's talk.
Okay, very good. Yeah. Yeah,
We certainly need to. I mean, at RHIC that's called the rev tick, and we will have a rev tick.
Hi,
Yes, I am a bit puzzled by this. Did you say that we should not align the timeframe with the revolution? That is strange to me. I understand that we need to identify each bunch crossing that interacts, because they will not all be the same, not having the same number of protons in them, so we really need to identify each of them. It would be much simpler to do that if we have a zero at the first bunch crossing of the revolution and then count to the end of the turn. So I don't see what the interest is of a timeframe which moves compared to the revolution.
We so that's it. I'm
Sorry, Jen, do you mind if I—
Yeah, please do, it's a good discussion, please do. There are two—
counters that we have to have. One is the one that's based on the 1160 bunch crossings; we do need this, and we'll be working with it — with the rev tick we'll get that information every cycle. That, of course, is a counter that we have to keep, because we have to untangle the polarization states. The timeframe, though, is going to be the package of data that's sent to the offline. And if we put the boundary of that package every time on the same bunch crossing — these are practicalities, but the truth is that when they're doing their initial studies and such, they're going to just analyze a single packet. And unless they analyze the overlap between packets perfectly, what that means is that they're going to throw out the first couple of bunch crossings of each timeframe. Eventually, of course, they could make this perfect, but I'm not guessing that they will do that all the time. And if you're doing that, if you're throwing out the first, say, six bunch crossings every ten revolutions, what you're really saying is that you're going to have a different efficiency for the first six bunch crossings of the timeframes. And if that's always correlated with the first bunch crossings of the cycle, then you're correlating that with your polarization states, and you're putting in a bias. And I think we want to avoid that if we can.
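To make the boundary-bias argument concrete, here is a small sketch with all numbers illustrative: if the timeframe length is an exact multiple of the revolution, the same bunches always sit at the timeframe boundary, whereas a non-commensurate length lets the boundary walk through the ring.

```python
# Which bunch (within the ring) lands at the start of each timeframe?
# Illustrative numbers only: a 1160-bunch ring and a timeframe measured in crossings.
bunches_per_revolution = 1160

def boundary_bunches(timeframe_len_crossings: int, n_timeframes: int = 8):
    return [(i * timeframe_len_crossings) % bunches_per_revolution
            for i in range(n_timeframes)]

# Aligned case: a timeframe of exactly 40 revolutions -> the boundary is always bunch 0,
# so any boundary inefficiency hits the same bunches (and spin pattern) every time.
print("aligned  :", boundary_bunches(40 * bunches_per_revolution))

# Non-commensurate case: the boundary walks through the ring, so a small boundary
# inefficiency averages over all bunches instead of biasing a fixed subset.
print("unaligned:", boundary_bunches(40 * bunches_per_revolution + 37))
```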
I mean,
Yeah, there may be some bias, but it's something you can try to correct for. And moreover, you can also use an empty bunch crossing, which is not used for physics, to put the revolution tick there, instead of the first filled one; it could be moved by a few. It could be put on the empty bunch crossings.
I think it would be a mistake to tie everything to the same thing. We're going to have to have two sets of counters: one is going to tell us where the beginning of the packet is, and the other is going to tell us where the beginning of the revolution is. Maybe we have to think about the terms.
Maybe also let me jump in here before I pass to Joe. So, for the concern just raised: we can always associate a bunch crossing with its position in the revolution. The GTU definitely provides the data to do that, and the GTU definitely knows how that information can be put into the timeframe data. Therefore, offline, we can always associate the beam crossing counter with a spin state and also with its position in the revolution; that is not lost — it is extra data passed along from the GTU, but we can definitely do it. The second part, the possible bias coming from aligning the timeframe, which is basically the offline analysis unit, with the revolution: yes, we can definitely recover the boundary crossings of one timeframe from the timeframes before and after, and we will be able to reuse them; it requires extra effort, of course, but we can definitely do that. But there is something much deeper and harder to eliminate, from the pattern recognition point of view. For example, in tracking or in calorimeter clustering, where you start looking for tracks leads to a difference in efficiency. If I always start looking for tracks from the crossing right after the abort gap, or from the first crossing, that will lead to a slightly different efficiency for that crossing compared with processing the full view of the whole timeframe. That effect is usually small. But nonetheless, we are looking for a systematic uncertainty on the spin asymmetry below 10^-4, and at the 10^-4 level that is something very hard to beat. In experiments that required even better control, like parity-violation experiments, they actually deliberately blinded the whole readout electronics and the reconstruction to the spin state, essentially blinding the crossing numbers, so that nothing could be biased against them, and only in the last, final stage of the analysis was the spin state revealed. That would be the extreme version of this approach. Here I don't think we need to go to that level of systematic control, but even at the 10^-4 level, I prefer that we do not tie the beam rotation and the timeframe exactly together, so that we deliberately avoid some of these biases. I hope that's another way to state what Jeff said. Then let me pass it to Julia.
Yeah, I was just going to say something that you already said, Jen, which is that I may have missed some of the discussions in the last few weeks, as I've been on leave for a while with a new baby. But I want to push back against the idea that somehow we will not analyze the crossings that are close to the timeframe boundaries, because we'll have to access some of the data in adjacent timeframes in the offline world. I think we will have to do that to take full advantage of the EIC data; it is not something we can afford to throw data away for, and we will have to have this sort of functionality in the offline world eventually. So we should develop our software from the standpoint of being ready for this.
That's true. Okay, so, Jeff, please.
So,
My point is that I fully agree with you, Joe. I mean, I don't know that it's absolutely necessary, but I agree that it can be done perfectly. But remember that our actual events are going to span at least seven, and probably more than seven, bunch crossings. And it's not just the full reconstruction software for which this matters; it also matters for things like any kind of simple monitoring that we do within the DAQ system, like trying to keep counters and such within the DAQ system, or a kind of first-level QA. So I think it is important to understand that this might not always be perfect, even though it can be made perfect eventually. I think we make life simpler for ourselves if we don't add that bias.
I think one must understand that the edges of the timeframes will be fuzzy, simply because the data from almost all of these ASICs comes out of FIFOs, and these FIFOs need not be deterministic. Which means that at the point in time when the RDO or the ASIC receives a tick command, the data coming out at that moment is late: it is five ticks earlier, ten ticks earlier, depending on the FIFO length of the specific case and the way the ASIC does things. So handling the boundaries will be tricky. And I absolutely agree that you don't want to tie them to the EIC beam rotation. But it will be tricky. I don't know, I think we should discuss it.
Yeah. Yeah, can you please? Yeah.
So, no, I understand, that is perfectly clear. But that's why we could also make this transition from one timeframe to the other in the empty bunches, because we have a number of empty bunch crossings, if I'm right. And there we have plenty of time to make the transition from one timeframe to the next without disturbing good data.
I'd say at least Yeah, yeah,
The abort gaps, while they're mostly empty — they're not supposed to have anything in them — do have stuff in them, at least at RHIC, and I would expect that will be true here. We study the abort gaps for things like background studies and such, so we also don't want those to be biased. They should be empty, and I understand what you're saying — it's not as important — but we still want to have unbiased data there.
But we don't have to put the timeframe boundary at the same empty one each time.
I understand I understand. So
We can avoid using that transition time, but also use other empty bunch crossings.
Yeah, also realize that the abort gap position, although it will be fairly stable, if it's anything like RHIC, is not fixed in time, right? It's not something they will always guarantee to keep in the same place.
Yep. Okay, those are all very good points. I'm taking notes, and we're going to have a recording and a transcript of the whole discussion, and I will try to summarize it in the minutes. Nonetheless, it's a very useful discussion, I feel, when we bring up the definitions, and the transmission of the 64-bit beam counter is something we definitely want to specify in our coming document, with help from the whole working group. Okay, so let me continue here. The next part from Tuesday: we do want to have a run structure, which is driven by configuration changes, like when we change something in the front-end configuration or the high voltage. At the same time, we want to have continuous readout and online monitoring — what is being taken and what the detector is doing — regardless of whether a run has started. That also means we will have a run counter and a run ID, and since these are tied to configuration changes, especially human-driven configuration changes, a run will probably be on the order of one hour. And then the last point is that we want redundancy in how we store the slow control data, which is another big task: the primary store is the database, and the secondary one is raw data file embedding. The database is convenient for us, and the raw data file embedding always associates the slow control data with the raw data from the fast DAQ, so we never lose it, for safety. Nonetheless, exactly how we handle the database and the slow control data still needs work, especially on the project side, and I think we need a follow-up discussion joining both working groups, for instance on how we define the database copying scheme and the data recording scheme into a database. So that's the summary of our Tuesday discussion. Okay, since we already got very useful feedback on this single slide, without further delay let me pass the screen sharing to William to present his study and testing regarding the timing framework. And really, my hope is that it's okay that we also make his talk working-meeting style, in the sense that anybody who has points, questions, or comments can jump in and interact with the talk at any time. I hope that's okay.
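As a sketch of the run bookkeeping just summarized — runs bounded by configuration changes, roughly an hour long, with slow control stored primarily in a database and secondarily embedded in the raw data stream — here is an illustrative structure; none of these names or fields are defined anywhere yet.

```python
from dataclasses import dataclass

@dataclass
class RunRecord:
    """Illustrative run bookkeeping: a run is bounded by configuration changes."""
    run_id: int                    # monotonically increasing run counter
    start_bco: int                 # 64-bit beam crossing counter at run start
    config_tag: str                # identifier of the front-end / HV configuration in use
    nominal_length_s: int = 3600   # runs of roughly one hour were discussed

@dataclass
class SlowControlSample:
    bco: int                       # ties the reading to the fast DAQ timeline
    channel: str
    value: float

def store(sample: SlowControlSample, database: list, raw_stream: list) -> None:
    """Redundant storage: primary copy to the database, secondary copy embedded in raw data."""
    database.append(sample)        # primary: conditions database
    raw_stream.append(sample)      # secondary: embedded in the raw data files

db, raw = [], []
store(SlowControlSample(bco=1_000_000, channel="hv.sector3", value=1450.0), db, raw)
print(db[0], raw[0])
```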
Let me try to share my screen.
Do you see my screen? It's not full screen, but it's large enough already. Yeah, okay. Looks perfect. Thank you.
Yeah, so I think we can skip this. This one everyone knows, so we won't spend too much time on it, but I do mention that the bunch length is pretty big; if you convert that into time, and you can measure the arrival time better than that, then we can use this information to determine the position of the interaction.
Let me just clarify that. So even though the bunches do have a length, a few tens of centimeters, they are very small compared to the bunch spacing, which you highlight here as 10 nanoseconds. So basically collisions still happen within a very small time range compared to the 10 nanosecond window, right? So I just want to understand your comment.
Yeah, my comment is that on this slide it's really spread out like this, and if we measure the time of arrival more precisely than that, then we can determine the position of the—
—collision. Yeah. Also, there has been quite a lot of study in the physics groups regarding that, using tracking to determine the vertex position in combination with timing, so that we can determine where the interaction actually happened. So I just want to add that piece—
—of information. Yes, the timing can also give us the interaction point. Yes.
it's very important. Cool. Thank you.
Okay, so the event structure I think everyone knows, and our overall structure is also here. So, the RDO command messages: we want them to be simple, and I think a single layer is better — one level of commands going from the GTU to the DAM and then to the RDOs, rather than multiple nested levels. I feel it is easier to deal with the data that way.
Let me just pause here. There seem to be no comments on it, so just as a note, I wanted to provide my support: a single layer sounds useful to me too. Yeah, okay.
Cool, please go on. Okay, these are details; I don't think we want to go into them too much.
Okay, I think everyone can see here the clock distribution and why it is separate from the run control. So, for the clock distribution, I think our goal is that it is beam-crossing aligned. What that means is—
—on the front-end board, we know a fixed phase relative to the beam crossing, and for the run control commands, we know at which beam crossing the command happens. So, going into the details: we generate the clock at the GTU, and if we use elastic buffers, then we can make the phase fixed and the jitter low. Then we use a dedicated fiber for the clock transfer from the GTU to the DAM, and this one should also be pretty well fixed. On the DAM board, we encode the clock with the data through the MGTs, and we also want the data to be in a fixed phase relative to the reference clock. And then on the receiving side, the RDO side, we recover the clock synchronized with the data. Can you see my mouse?
Yes, yeah. Yeah.
So here are some numbers, and we'll try to show later how we achieve this. Similarly, for the run control, we send the run control from the GTU, pass it on to the DAM, and then to the RDO, and we want to implement it deterministically, so all the RDOs get the command at the same time. Let's go to the next page.
So, yeah, can I ask a question on this slide? You put a jitter specification on the slide, but how about stability, like the phase, or stability across resets?
These are the specs. So—
What I mean is the specification: our clock distribution jitter specification is about 10 picoseconds, and for the ToF we need—
—it to be less than about five picoseconds.
Oh, yeah, that's exactly my question. Four or five picoseconds, first of all, is very, very good, and I believe if we can achieve that, every subsystem in ePIC would work. By default, most subsystems don't require a few-picosecond clock. And so—
my question is, what is driving the few-picosecond requirement? Is it also the—
—ToF? Yes, yeah. Okay.
So in principle, if it achieves 10 picoseconds, this clock distribution would work for almost every other ePIC detector, except the ToF. Is that a good summary?
I think yes. The other thing — somebody wanted to add to that; do you want to do that? Yes.
Please? Thanks. Yeah. Okay,
Great. So I think my mind is starting to catch up, okay, that makes sense. Next, Jeff, please. Yeah.
William, thanks for this. So, in that same area on the RDO, where you say the phase is stable but not fixed: do you have numbers attached to the stability and to how fixed it is? Is the "stable" the 10 picoseconds, or is the "fixed" the 10 picoseconds, currently?
Okay. This one is stable enough with the data. But when we do a reset, the phase can jump by an integer number of the unit interval, about 126 picoseconds; however, we can fix this. I will show that measurement later — that's why I show the automatic reset.
I'm sorry, I didn't understand that.
He is working on it, working on it. Okay, yeah. He will show you about these automatic resets and so on. But let me just explain, for example for the time of flight. What William is showing here is the random jitter, which is Gaussian. Everything else, which is non-Gaussian, are these various — how would you call it — steps. So, phase steps, not random: phase steps due to resets. Now, the assumption is that once you have a reset of the system, the phase stays the same. But for simplicity, you don't want to track this phase change; you would rather have this whole thing constant. So if you require that the phase change is less than five picoseconds, then you don't have to worry about the phase change. If it's larger than that, say it's 30 picoseconds, then you do have to worry about the phase change, which means you have to dig into the data and figure out when this happened, and you have to keep tracking it, and it gets really ugly really quickly.
We would need a calibration, probably at the run-by-run level, or worse.
I mean, run by run, absolutely. But the problem is, what if you do auto-recovery and recover parts of the system mid-run? Then you are in a nightmare, right? For every reset, you have to tag the reset and at which time it occurred, dig into the data — it gets ugly. The hope, relying on this timing protocol, is that this thing goes down below five picoseconds. Right.
And if that's not true, then we have our work cut out for us to figure out how we're going to do this calibration, correct, very, very quickly.
For detectors which care — and time of flight is one of them — I mean, for time-of-flight-like detectors which need timing precision better than 20 picoseconds, you can't have a jump of 50, right?
Yeah, call it the AC-LGADs plus the HRPPDs. Yeah, yeah.
Will you consider alternative direct clock distributions, I suppose, for ToF-like detectors?
That's sure, right. So I mean, with the prototype RDO that William will show, we are also testing the direct clock distribution. Yeah. But there is a third component which you should also know about, and that's slow variations of the phase depending on external circumstances, for example temperature. I don't want to dwell on this, but in essence you have three contributions: you have the random Gaussian jitter, which is three picoseconds — we measure it, everybody measures it, five picoseconds, done. You have this jumping phase, which happens reset to reset, a problem supposedly solvable using Xilinx primitives. And then you have the slow variation of the phase due to temperature, which, according to some of these measurements, is not small. However, I believe that our variations in temperature are small. And by the way, that is one of the reasons we would like to have runs which span only an hour or so.
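A tiny sketch of the bookkeeping logic implied above: if the phase step at each reset stays below roughly 5 picoseconds it can simply be ignored, otherwise every reset would have to be tagged and calibrated out; the threshold and numbers are illustrative.

```python
# Decide whether phase steps at resets can be ignored or must be tracked.
# The ~5 ps figure is the hoped-for bound quoted in the discussion; illustrative only.
PHASE_STEP_TOLERANCE_PS = 5.0

def needs_per_reset_calibration(measured_steps_ps):
    """Return the resets whose phase step is too large to ignore."""
    return [(i, step) for i, step in enumerate(measured_steps_ps)
            if abs(step) > PHASE_STEP_TOLERANCE_PS]

print(needs_per_reset_calibration([1.2, -3.8, 4.9]))    # [] -> nothing to track
print(needs_per_reset_calibration([2.0, 30.0, -55.0]))  # large steps must be tagged and calibrated
```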
Right. And the problem is that that final contribution is the one that's the hardest to test in any reliable way, because if it is temperature dependent, for example, we have to have it in the actual setting.
Yeah, but I mean, the temperature doesn't change in the detector unless it's day-night, unless it's summer-winter, you know, those crazy things. So that's why you would like to limit the runs to about an hour, I say. Then you can cross-check the calibration within a run, but you don't want to calibrate every second, right.
Okay, yes, thanks.
Yeah, I think also, for this critical system — the timing distribution is critical — it's good to have multiple redundancies. And in our calibration, we do want to run continuous calibration; it could be something like every hundred seconds, and we are at least going to do it run by run, right. It's also good to carry the option of direct clock distribution as a backup; it will make the reviews much easier. So I guess my comment is: good, we are also building in redundancy, so even if something fails somehow, we are still able to recover.
Right, I agree with you. Yes — I agree with all of you.
For all the reasons.
Great, the cross-checks are why this is helpful. And I don't see any hands raised. William, can I ask another question on the last slide? Sorry, the last slide — yes, actually here. My question is, when you list "status" on the right side, does that mean the lower-to-upper communication lines, for example DAM-to-GTU communication? When you say "status", is that buffer state, sync, and so on?
Yes, we want to know things like whether anyone needs backpressure — okay, my buffer is close to full — and that kind of stuff. Okay.
I assume we get to discuss the synchronization somewhere later in the slides — that's true, right? So I can ask my question later.
Yeah. Okay, sounds great. Thanks. Cool.
So really, yeah, so there's 100 respective.
Yeah, sorry, the other part of my question. I just want to clarify: when you say "hijack the GTU control", what you mean is that there are separate modes for the configuration?
No — what I mean is, in the full system we have the GTU; when we transfer the control from the DAM to the RDOs we have a set of bits, and some bits are for the GTU and some bits are for the DAM, so they can be handled separately. And then, if you don't have the GTU for a subsystem, the DAM can take over that behavior, act like the GTU, and control the subsystem by itself. That's what I mean by hijack. Okay?
I see what you mean — you can run standalone. Okay, good, good.
So that's the buffer control level; we will have buffers at every level of the DAQ. Normally the data just flows from bottom up. Then if, say, the DAM buffer is full or close to full, it backpressures the RDOs, and the RDOs hold the data. That's the simple case, but if the buffers overflow, then we're going to lose data, and then the GTU gets a chance to say, okay, everyone stop, or drop the data, or something like that, to try to keep the synchronization. If someone loses data, if they lose their header or trailer, then they go into an automatic resync.
So let me jump in here. I think it's good practice to have a global coordination of the backpressure. Meanwhile, let me mention the redundancy part of this. Besides implementing this in firmware, we're also going to do a QA on a timeframe-by-timeframe basis, to check that every subsystem has complete and good timeframe data — I think ALICE does that, where every subsystem has a flag for it. And if multiple subsystems fail, then we're going to throw away that timeframe for analysis. The backpressure actually can help us put the bad timeframes together, so everybody is dead at the same time; that's a way to avoid random sprinkling of dead time across the timeframes in our system.
Yeah, okay, great. So, the other one: I think the GTU is going to broadcast the heartbeat or the crossing counter to the RDOs, so that the subsystems — the DAM and the RDO — can check the synchronization. So—
And also, just to add: there are a few ways to keep the system in synchronization. One is that a subsystem, like the DAM or the RDO, keeps a counter of beam crossings and also a counter of heartbeats, so they keep accumulating them, and then you do a synchronization check. Another way is to broadcast that information from the GTU. When we do broadcast, the problem, as you pointed out earlier in this meeting, is that we cannot broadcast the full bits; nonetheless, we just need to broadcast enough bits so that, within a checking window, handling the rollover is sufficient. For example, for the beam crossing counter, instead of the 64 bits from the GTU down to the RDO, we only need fewer bits — say eight bits would be sufficient to account for any variation of FIFO length, latency, or delay. So it either starts with a locally maintained counter on the DAM and the RDO, or another possibility is to simply broadcast the lower bits, so that the DAM and RDO only need to receive them rather than count themselves.
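A minimal sketch of the synchronization check just described: the GTU broadcasts only the low bits of the beam crossing counter, and a DAM or RDO compares them against its locally accumulated counter, allowing for a bounded latency; the bit width and window here are illustrative, not a specification.

```python
# Check a locally accumulated crossing counter against the low bits broadcast by the GTU.
# Bit width and allowed latency window are illustrative, not a specification.
BROADCAST_BITS = 8
MAX_LATENCY_CROSSINGS = 64     # allowed FIFO/latency variation; must be < 2**BROADCAST_BITS

def in_sync(local_counter: int, broadcast_low_bits: int) -> bool:
    """True if some counter value within the latency window matches the broadcast bits."""
    mask = (1 << BROADCAST_BITS) - 1
    return any(((local_counter - lag) & mask) == broadcast_low_bits
               for lag in range(MAX_LATENCY_CROSSINGS + 1))

# Example: the GTU broadcast the low 8 bits of crossing 1_000_000; the RDO's own count
# is 1_000_005 because of FIFO latency, which is still accepted as in sync.
print(in_sync(1_000_005, 1_000_000 & 0xFF))   # True
print(in_sync(1_000_500, 1_000_000 & 0xFF))   # False -> out of sync, trigger a resync
```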
Yeah, that's why we added this one: the GTU can broadcast a counter that we program on a periodic basis — that is a known, predefined number, like a bunch crossing number sent from time to time. And since we already broadcast at least one, when we do broadcast it, the RDO knows what the expected number should be, and with this information we can still check. Yep. Please.
This backpressure business — I mean, this is very complicated, and William can't do it justice in five bullet points. Just a point while this is on the screen: William, you say the RDO will hold data if the DAM is busy. But the RDO doesn't know that the DAM is busy; you have to build in a back-propagation mechanism from the DAM to the RDO telling the RDO that the DAM is busy. Yeah, this is something we discussed a year ago. Yes, right. So for this entire thing you have to find the places in the entire readout chain where you can have overflows.
Yes, so that's where we do the overflow. Yes. Right.
So the important part here is that you have to know where an overflow will happen, and you have to mark it; you can't just drop data on the ground, which is what would happen otherwise.
In the RDO, when that happens, we can choose how to discard data on a buffer overflow before it reaches the DAM—
Right, you can discard the data, but it has to say which data was discarded.
Yeah, then we'll see how to record this in the RDO data trailer, to show that some data was lost.
But this protocol — let's call it an XON/XOFF type of protocol, which we haven't designed yet — you have to build it into the DAM and the RDO. It's not easy.
Yeah, definitely details that we need to work on. Right.
And there is another possibility — let me jump in here before giving it to Joe. The RDO always pushes the data up, and the DAM has to receive it. But the DAM buffer could fill up, and in the case of hitting a high watermark, the DAM sends a message to the GTU saying that we are going to have incomplete time slices, so let's pause everyone. That would be another possibility, right? In that scheme the RDO does not need to know about the DAM buffer; it always sends data, and the DAM always receives it by making sure there is sufficient buffer behind the receiver. And in our current implementation, as far as I know — Remy, right? Yeah—
You will have so many fibers and so many cases that you might end up being busy 100% of the time. You cannot do it with a global busy — you should not do it with a global busy; I disagree with that. And by the way, since we are designing while talking, the way to get out of this is that the DAM sends data to the RDO all the time anyway, because that's how fiber protocols work. So there is an idle token, if you want. You can have two flavors of the idle token: an idle token meaning go ahead, and an idle token meaning stop. It's similar to what very old RS-232 communication used, the XON/XOFF protocol. It's just an example, but we need it. The implementation of this whole thing is going to be tricky — that's my point.
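A minimal sketch of the XON/XOFF-style flow control just suggested, where the DAM's continuous idle tokens carry a go/stop flavor and the RDO sends or holds accordingly; everything here — names, buffer depth, behaviour on overflow — is hypothetical and only illustrates the idea.

```python
from collections import deque
from enum import Enum

class IdleToken(Enum):
    GO = "go"      # idle token flavor meaning: keep sending
    STOP = "stop"  # idle token flavor meaning: hold your data

class RdoSender:
    """Sketch of an RDO that sends queued data only while the last idle token said GO."""
    def __init__(self, buffer_depth: int = 4):
        self.queue = deque()
        self.buffer_depth = buffer_depth
        self.allowed = True
        self.dropped = 0

    def on_idle_token(self, token: IdleToken) -> None:
        self.allowed = (token is IdleToken.GO)

    def queue_data(self, packet: str) -> None:
        if len(self.queue) >= self.buffer_depth:
            self.dropped += 1        # overflow must be counted/flagged, not silently lost
        else:
            self.queue.append(packet)

    def try_send(self):
        return self.queue.popleft() if (self.allowed and self.queue) else None

rdo = RdoSender()
rdo.on_idle_token(IdleToken.STOP)
for i in range(6):
    rdo.queue_data(f"timeslice-{i}")
print("sent while stopped:", rdo.try_send(), "dropped:", rdo.dropped)
rdo.on_idle_token(IdleToken.GO)
print("sent after go     :", rdo.try_send())
```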
Yeah, I'm certainly trying to evaluate the trickiness of the DAM-to-RDO backpressure propagation, right. The alternative is just to provide a large buffer, so it can ride out many of the slowdowns. And if the RDO is busy and tries to hold on to the data — the RDO doesn't have much memory to hold the data anyway. So is it better just to send the data up as fast as it can, blindly? Blindly, yes; it's like UDP sending data. Yeah. And—
So how does the DAM know that data was lost? So the DAM has to figure this out?
Yeah. In our case, the DAM is always receiving the data, and it has to have enough buffer to put the data in. That is the critical assumption. All right, all right.
All right, we don't have to design it right now. But, you know, we need to have a detailed analysis of all the possibilities. And I disagree that you have a global busy — with that I disagree.
I'm not sure I'm set on a global busy either, but I do agree that we need to design this out; it's tricky. Okay, Joe, please.
I mean, we had exactly that discussion about a year and a half ago at the streaming workshop, and I think I presented a scheme for how to do that, which is basically that these periodic bunch crossing signals — which are called heartbeats in ALICE — are qualified. So you have a heartbeat accept and you have a heartbeat reject. Basically, when you as an RDO receive a heartbeat accept, you know that you can send a time slice, or a bunch crossing or whatever, with data. If you receive a heartbeat reject, then as an RDO you know that at this point you should be sending an empty packet that basically says: this was a heartbeat reject, and here's the time slice that I'm responding to. So this is the way we have it in ALICE, basically, and maybe, since you already have this periodic signal for synchronization, you could repurpose it as a heartbeat signal, so to say, and then also qualify it just like we did in ALICE.
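A minimal sketch of the heartbeat accept/reject behaviour Joe describes: on an accept the RDO ships the data for that time slice, on a reject it ships an empty packet that still identifies the slice; this is purely illustrative and not the actual ALICE or ePIC format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TimeSlicePacket:
    timeframe_id: int
    accepted: bool
    payload: List[str] = field(default_factory=list)  # empty when the heartbeat was rejected

def respond_to_heartbeat(timeframe_id: int, heartbeat_accept: bool,
                         pending_hits: List[str]) -> TimeSlicePacket:
    """RDO response: full data on heartbeat-accept, an empty marker packet on heartbeat-reject."""
    if heartbeat_accept:
        return TimeSlicePacket(timeframe_id, accepted=True, payload=list(pending_hits))
    return TimeSlicePacket(timeframe_id, accepted=False, payload=[])

hits = ["hit-a", "hit-b"]
print(respond_to_heartbeat(41, heartbeat_accept=True, pending_hits=hits))
print(respond_to_heartbeat(42, heartbeat_accept=False, pending_hits=hits))
```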
Maybe I can attempt that — the time slice, oh, here it comes on this page. So the GTU sends the start of the time slice — basically a start or a stop — and we can also send a stop to everyone; I mean, every RDO will see that same command. So it's similar to what we just talked about with the heartbeat.
It looks similar to me, yes. Yeah. Joe, can you confirm? Okay — sorry, Joe, I couldn't hear you.
I didn't know the question came back to me somehow — yes.
So this start/stop signal looks just like a heartbeat flavor of accept and reject?
A flavor of it. I mean, the whole point being that there's not really a direct global signal — which Tonko doesn't like — but that you can, on a heartbeat-by-heartbeat basis, determine for each RDO whether it should be sending something or not. That is the whole point.
And this could also be implemented per RDO, right, so it doesn't have to be global.
I think this one ought to be global from the GTU, and then sent down to—
What I mean is, a similar scheme can also be implemented on the FELIX, from the DAM to the RDO, so that we could manage it there; it would be easy to—
—do. Also, for the RDO timeframe, the time interval does not need to be uniform. What I mean is, at the start of a fill, for example, it could be shorter, say half a millisecond, and at the end it could be like one millisecond, depending on the luminosity or the data rate. Is that acceptable for the timeframe?
Oh, sorry, let
me just ask a question here before I pass on to Jeff. When you say time interval, do you mean exactly the timeframe length? Oh, okay. I do prefer it to be a fixed interval, regardless of the data content, but I think we can discuss that later. Thanks for helping me understand this.
So, Jeff, please. — Yeah, I just want to make the point that I think this is really important. Although it can maybe be done in two steps, we have to understand what we're doing well enough to make sure that the hardware is going to support whatever that is. The other thing is that this is really complicated, like all of you have acknowledged, and we need a strategy for how we actually do it, because I don't think this is going to be finished until we document it and then ask detailed questions about the documentation. And I don't think we can do it in this mechanism, because I get confused by what people are saying, and I don't think I'm the only one. So I think we should defer this a little bit, because I think we should get through William's talk, what he's trying to say, and I think that's not this. But at the same time, maybe at the end of this meeting we should discuss how we can make progress on this question, because I think this is actually the core of how we're going to run, and it definitely impacts what the hardware has to be. Thank you.
This is all sent from the GTU, so it can do whatever the requirements are—
Well, yeah, but we have to ask the detailed questions: how does the GTU know what's required? We have to go through and ask a lot of detailed questions about every possible scenario, and we can't do that in this meeting. But we have to have a mechanism by which we do this in the next month or two. Yeah, yeah.
So let's go back to the flow of the talk, since we—
are on this page. So this is also really complicated, and that's okay. So this part is the high-speed link from the GTU to the DAM; we want to transfer the clock and the control data on a dedicated fiber. And then another one is the feedback from the DAM to the GTU, and this one could be special — it could be used, for example, for status — and these could go on spare fibers.
So can I ask a quick question on that? So there is a high-speed link back from the DAM to the GTU, right? So the GTU, in this incarnation, would have to receive about 100 high-speed fibers? Is that understanding correct?
Yes or no. Yes — well, actually, we don't yet have that many controls from the GTU to the DAM. I think it would be nice to have enough bandwidth as well, so a slower link could be sufficient, and then we may not necessarily need the MGTs there. So that we'll see.
I see, okay. I think I'll probably have something later, but yeah, that was the principal clarification. Yeah. Basically—
—for the DAM to the RDO: when we're using the MGTs from the DAM to the RDO, we can set which mode we are sending downward — that is, the pseudo-clock mode, the control mode, or the idle mode — from the DAM to the RDO. And for the uplink from the RDO to the DAM over the MGTs it's similar: we can have the status mode and the data mode. Let's not go into too much detail — then we package the data and send it up. Okay, this one we can skip. Okay, yes, my tests. So my setup is this: there is a crystal oscillator that we use as a reference, and we send it to the FELIX card, and then test the RDO boards, and then there is a clock cleaner. So how do we measure this? This is the signal structure. We measure it at this level: the reference clock — the GTU clock — we use as the common reference, and then on the DAM side we encode the data on the MGT output. When it's the pseudo clock, it also looks like a clock to the RDO; then we receive the MGT output and we recover the clock. After we recover the clock, the measurement I do using the scope is to take the two signals and measure the width of the difference of the two signals. So what we call the phase-difference width is the uncertainty, the spread, of that measurement — that's how we test the jitter here. So, yeah.
So presumably, in the same setup, you can also measure the reset stability, right? Since you can reset the link and see where the phase starts up on the scope.
Yeah. The other way is to measure the two clocks with a signal analyzer, but here I used the scope this way to measure it. Yes, thanks, please. Okay. So here is the RDO recovered clock — the positive edge and the negative edge — and over a long period we can see the width; the test and measurement can run for a long time, much more than a microsecond range. So we do the difference measurement: the width of this line gives the clock jitter, about two point something picoseconds, and that width is how you measure the phase, and it's very stable. The statistics here tell you it repeats and is very stable; I accumulated over many hours and it stays stable. This one is after the jitter cleaner, and you get a similar result; this one here is before the jitter cleaner. Okay, so that covers the recovered clock. Then the stability after a reset: every time you do a reset, you can see that the phase is scattered all over. What happens is that the phase jump after a reset is an integer number of the unit interval, which for our clock is about 127 picoseconds. But what we can do is keep resetting until the phase falls into a predetermined window, and this reset can be done automatically and left to settle at that value. That's why I say the phase, after we reset, is stable within the jitter range. So I think there's no problem with the recovered clock. The next one—
So, William, can I ask a question here, just to make sure I understand. So you keep resetting until you recover the predetermined value — with a scope we can see that, but how do we do it experimentally? In that case we just have the RDO by itself, so how does it know whether the reset landed in a good place? Does that make sense?
Yeah, okay, that's the one we did — let me go to the RDO here. So on the RDO we have the recovered clock, and then we have the pseudo clock that we talked about before, and both of them feed into a phase-difference detection logic that controls the reset. So if we are not in the window we want, we keep resetting until we get into that window. Sounds great.
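A sketch of the automatic reset loop just described: measure the phase of the recovered clock against the pseudo clock and keep resetting the transceiver until the phase lands in a predetermined window; the unit interval, the window, and the random model of where a reset lands are all illustrative.

```python
import random

UNIT_INTERVAL_PS = 127.0       # approximate serial unit interval quoted in the talk
TARGET_WINDOW_PS = 5.0         # accept the phase if within this window of the target

def measure_phase_after_reset() -> float:
    """Stand-in for the phase-difference detector: each reset lands on a random UI step."""
    return random.randrange(8) * UNIT_INTERVAL_PS

def reset_until_locked(target_ps: float = 0.0, max_resets: int = 10_000) -> int:
    """Keep resetting until the recovered-clock phase falls inside the accepted window."""
    for n in range(1, max_resets + 1):
        phase = measure_phase_after_reset()
        if abs(phase - target_ps) <= TARGET_WINDOW_PS:
            return n               # number of resets needed this time
    raise RuntimeError("never locked; widen the window or check the hardware")

print("resets needed:", reset_until_locked())
```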
Yeah, cool. Yeah. Thanks, guys.
Yeah, thanks, that's what we do. Oh, there's a hand raised.
And how many resets did you have to do before it recovered the phase? It looks like about 100 resets?
Exactly, okay, right. Okay. If you're unlucky it could be like 1000; if you're lucky, maybe it's one or two. So—
And each reset — how long does it take, about a second each?
Yes and no — that's the thing. Another issue is that with the jitter cleaner — I use the Si5395 or the Si5394 — it takes about 200 milliseconds to become stable, and you have to wait for that, so for everything to be stable it takes on the order of a second. So that's an item like this. But we do have other options. I see. Yeah.
Cool, that's very helpful for me. Let me pass it on to Jeff.
So the pseudo clock scheme that you're talking about — does that work if you're power cycling, or only if you're doing a reset? Is that true?
After a power cycle, or after a reset, or anything like that, we can send the pseudo clock, reset, and align the phase. Yes.
Oh, so it does work after a power cycle. Okay, yeah. I was misunderstanding: the pseudo clock is directly sent over the fiber from the DAM, just in a different mode. Okay, I get it. Yeah, for some reason I was thinking this was an internal clock or something; I was misunderstanding completely. Thank you. Sorry.
So for the downlink into the RDO, we send a fixed pattern — that's why we send a lot of this pattern, and it acts as the pseudo clock.
I get it, I just didn't understand the first time. Thank you.
And the other one is the RDO — okay, this one is the MGT output versus the reference clock, the transmitter on the DAM. For this one I don't have as much data as for the previous one, I'm afraid. Okay, this here is the reference clock, and we are zooming in quite a lot, so you cannot see the full clock; this one is the transmitted serial data from the transmitter, and this one is the transceiver clock out. So, every time we do a reset — okay, if you ask me about the—
Yeah, okay. If I do a clock or PLL reset, then this one and this one jump around. But after the PMA synchronization it is stable within the jitter during the entire run, though it still comes out with a rather large offset. I haven't had time to put the next-stage test into a plot, so let me just quote some numbers. When we reset, this one could be anywhere — it jumps a lot — and after the synchronization it settles to about a 30 picosecond range, but the phase difference between the serial data out and the transceiver clock is not yet as small as we want. So what I mean is that this and this are related: as I see it, what we need to do more of, on the FELIX card side, is to understand why these two have a difference. The other approach is to implement a measurement of the serial data out against the reference clock, so that we can also keep resetting until this one falls into a predetermined phase relative to the reference clock. That's what I aim to do. The problem is I cannot do that with the current setup: the measurement precision would be about a tenth of a nanosecond, but here we want the measurement precision to be in the one picosecond or better range, so that we can fix that one with the same reset procedure. Okay, this is the current status of the dedicated clock and pseudo-clock work. As I see it, the next step is on the RDO, and then how to go further down: what we might want to do next is either use the FELIX to drive the RDOs — that's more realistic — or use the dedicated setup to drive the prototype RDOs or the final, production-type RDOs. Okay, and here is the list of things we want to do: I think probably what you saw, and then how to define the commands and implement the full protocol, and then next to go on to the GTU prototypes for the system.
Okay, thanks. We
have a lot of information, and we already had a lot of discussion during the talk. So let me pass it on to Jeff for comments.
Yeah, thanks a lot for this, William; you did a lot of work, and a nice presentation. The question I have: you gave all these connections between the various pieces — the DAM and the GTU, and the DAM and the RDOs — with the data volumes or data sizes and the speeds. Do you also have the latencies for these?
The latency depends on the firmware and the FIFO lengths, and we want it to be a fixed latency.
Yeah, so I guess that was my point. Do we need to go with, like, a worst-case latency for any of them? Are we talking about, what, seven or eight bunch crossings as our defined latency for everything? Because this is the other thing that goes into all of these backpressure protocols and so on and so forth: if we have a latency that is always some fixed number, we actually have to account for it when we're talking about a fast busy.
Just the travel time on the fiber is about half a microsecond.
Yeah, which is — yes. So I can probably move on. So we want to set a latency on the order of a microsecond. Yeah. Okay.
Yeah,
I think you really want to use a high watermark, right, so that you essentially allow for that delay.
Yeah, yeah. You don't want to be right at the limit and then overrun it. No. Yeah.
So I think what we have from this talk already, along with that latency number, is a really good description of the capabilities of each of our communication links. Then we have to build the kind of protocol that we've been talking about — how we do the backpressure based on those links. Everybody has their own ideas about the backpressure protocol, and we have to write them down and then say: okay, does this work with these numbers? What are the specific signals we send?
Well, that's, that's the page that we want to use.
For the protocol, we say: okay, what do we need, and how do we divide it into the bits?
So my question is really: how do we make progress on these communication protocols? Because, from what I can tell, everybody has different ideas, and probably everybody's ideas work; it's just, what do we choose in the end? So we have to have a procedure for doing this. Yeah.
That's what I say: once we establish the document, then we have this one set, and we can still improve on it.
Right. Do people have any ideas? I mean, I guess my concept for this kind of thing is that we go to either small groups or individuals to do some documentation — document a scheme — and then we try to poke holes in it. But maybe people have better ideas than this, or other ideas?
So I mean, now is the time really to do a design and the kind of protocol specification writing and so on. I think, as William's last topic showed — and indeed William put a compact wish list on the slides, but it still asks for details — my feeling is that it's either a small group meeting, which would be like a work-fest aiming to focus on just this, or we have a full working group meeting in a larger setting where, for example, we discuss one specific topic for that meeting, really flesh it out, and form a consensus. I think either would do the work, and I probably have a slight preference for the small group discussion, and then we bring it back to the whole working group for the consensus discussion.
That sounds good to me too. But maybe as a precursor to that, we should take this talk — those relevant pages where we were discussing the width of the bits and the rates of each one of the communications — and define that as a specification, so we have a starting point: whatever our protocol is, it has to fit in this scheme. So maybe we make that decision based effectively on the list in this talk. That's a starting point.
Even for that, right — so for example, section three point four. That's the GTU-to-DAM link, right? Could you go to 3.4?
Yes, so another variation. I think what we're currently using as the baseline for GTU to DAM is a high-speed link, so it actually goes over an MGT, and it is duplicated — a link for every subsystem, for example — since the GTU and the DAMs are the only two entities talking across the whole fleet of DAM servers. And although it's many high-speed links, they are not hard to provide with modern MGTs. With high-speed links you also have the advantage that you can actually afford to broadcast the beam crossing counter — a reduced version of it — and also the heartbeat counter, so that you never need to worry about a DAM going out of sync, since the same number is embedded in the data. It's also much nicer for the offline data: you don't need to worry about it, you simply grab the number, and it's the same number as on the GTU. So that's another variation of the hardware setup that may affect how we want to design the protocol. Therefore I think it would require a more focused discussion on both the hardware link and the protocol to make some progress, before we discuss it in the full working group again — or at least we discuss that particular topic in the working group meetings.
Okay, so you're saying
you're saying you want a higher speed link than this?
Yes, especially for GTU to DAM, at least as an option, right.
So effectively, we have to discuss this — the synchronization — so we have to discuss this kind of pure specification of the links first and get that documented before we do much else. Preferably, yes. Okay.
Okay, thanks. So, we do need to make progress, right. So as a proposal: in one of the very next upcoming meetings, we could make it a dedicated discussion aimed at a small group that can really dig into the details without bothering everyone else. Nevertheless, it will be in the working group meeting slot, so everyone else who wants to join can join. How about that — one meeting just exhausting the link protocol and also how we implement the links?
Yeah, I think also, when we have the prototype RDOs, we will implement some preliminary version of that, because we want to control the RDOs from the FELIX, yes.
And we should take that as a starting point, certainly. Yeah.
And then we spell it out — and also the slow control, like how the RDO configuration goes along with that, that kind of stuff? Yes.
Okay,
So it's already one and a half hours; I think it was very useful, and we are well set up to push forward on this design. I think we do want to have a follow-up: the conveners will announce when we form the work-fest-type small group discussion, which we could do in a coming working group meeting slot. Meanwhile, we do have a recording of this meeting, and I will try to do a summary after today's meeting. And I just want to thank William again for this very nice work and also for the hard work to put it all together. Okay, I hope that's a fair summary. Any last words before we depart? Yes, this
is Fernando. Just a reminder that next week we'll have a joint meeting with the MPGD group regarding the SALSA specifications and the requirements for the MPGDs. So you might receive an email from Kondo or the MPGD group very soon. Thank you. — Thank you, Fernando. Okay,
any further questions, suggestions?
We see so many people working on the offline analysis or on the RDO. Is there anyone working on, like, the online control or something?
Um, so I guess, regarding the many people working on the offline analysis of RDO data: we do have the computing group, right, and the offline software they're working on is to digest the data from the DAQ system; that has been discussed in the streaming readout working group. And then I think your question is about the online control and presumably the QA, and that's a good question — that will be done by us, right. So I hope, and I think, we have already touched on some of that, but as I said, in our working group we still need to work on it; I don't think it's a fully materialized work package yet. But we do need to — and I wonder whether Jeff and Fernando have further comments on that, too.
So, no, there hasn't been a lot of work there. Some of that depends on the hardware, and some of it, in terms of the project — most of that work is scheduled to occur when construction starts, so in mid-2025, right now. That said, we can certainly discuss what it's going to look like; that doesn't preclude us from trying to understand what we're going to expect
to do. Yeah. Okay, sounds great.
Okay, if there are no more comments or questions, let me thank William again, and please do expect that we come back to this topic again and try, especially, to produce documentation in the near term. Thanks again, and talk to you next week, which, as Fernando said, will be a joint meeting with the micro-pattern gas detector group. See you next week. Thank you. Goodbye, everybody.