Okay, so just to revisit the overview: we're going to cover two topics in the next 20 minutes or so, and we'll try to be quick on each one. We can always do follow-ups, but let's use this time wisely and zoom in on a couple of key topics. The first one will be collaboration, the second one optimization. I sent a few thoughts on how we could position different things within those areas, but I really just want you to speak freely, because in the end the content we put together should show unique voices from inside the company, and I don't want it to get too messy either. I'm not going to go through all the questions I sent; I'll pick a few, and then we can go from there and see how we do on time. Okay, so let's start with collaboration, and let's start with some high-level thoughts. What are the key technical challenges that stand in the way of collaboration for large enterprises today? Give us the high level, but then also get specific to the MLOps engineer and the challenge for that role.
Yeah, so maybe before we go a little deeper into collaboration: there are multiple studies and reports out there from folks like Gartner, Forrester, IDC, and so on. The numbers vary slightly, but all of them are saying that less than half of the models people build today in large enterprises make it into production, meaning they actually get deployed and are actually running. And even among the roughly 50% that make it, less than 20% actually turn out to be useful, or produce the business outcomes they were supposed to. So overall, something like one in ten is successful. The reasons these analysts point out, folks like Gartner and IDC, fall into two buckets. One is what they call cost and infrastructure issues, which is where we might talk a little about how optimization helps. The second is what they broadly call process. Process is a fancy word, but the idea is that all modern enterprises are complex, and this is where the aspect around collaboration becomes really relevant. So where does this complexity come from? To answer Shabana's question: my background is software, so I'll make some analogies to how software development evolved over the last several decades, because there are a lot of good analogies there and a lot we can learn from them. The main thing about machine learning, which is not too dissimilar from software, is that there are multiple stakeholders in making a model successful. In almost all cases there is the business: in an insurance company, whoever is trying to bring out a new insurance product; in a financial company, whoever owns the new credit offering, or whatever else is coming up. They are typically the ones with an interest in creating a new model and taking it to production. Then there's the actual technical team who builds the model, sometimes inside the business, sometimes outside it in a central organization that builds these models. And then there's something relatively unique to models versus traditional software, which is that, as you are well aware,
a lot of the AI models being built now are not easy for people to understand, and we can talk more about what that means. They are complex artifacts, not like traditional software where somebody could actually read through the code, and they are not deterministic the way software is, because they are based on data and data changes. So all of these organizations are beginning to realize they need a function that is essentially a risk or governance function. This is fairly well established in organizations with regulatory oversight, like financial services and healthcare, but not necessarily in other industries. You have all heard the stories, even from the best and largest companies out there: Apple put out a face-recognition feature that didn't work great, Google had a similar issue, Amazon put out a recruiting model that was quickly found to be biased in various ways. So for any number of reasons (reputational risk, financial risk, regulatory risk) there is a function emerging around this. That's the third stakeholder. So: the business, the data scientist, the regulatory function, and, last but not least, the IT organization that runs and operates the model through its lifecycle. This, quote unquote, is the collaboration challenge. How do you make these different organizations work together, different folks with different skill sets, who in almost all cases are comfortable with their own set of tools for each of those things? The person who does regulatory-risk work has their own tools, the data scientists obviously have their own tools, and the IT folks have their own tools for how something gets deployed on the infrastructure, be it cloud or on premises. This matrix continues to grow, and grow very rapidly, and that is the real collaboration issue. So the real question is: what can technology from a company like Vianai do to help address this particular challenge?
Can you continue on that? That was one of the next questions: how is Vianai solving this problem? And can we hone in on a persona, or a couple of personas, so we keep it very specific and tangible?
Yeah, so let me take my first analogy to software, and I'll do several of these. It doesn't mean we have to write about the analogies; they just let me communicate a lot more easily. If some of you are as old as I am, you remember how software used to be written: what we called waterfall. Somebody, the software engineer, would write the software and then hand it over to an IT person, who would deploy it and operate it, and if a problem happened, some emails or whatever else might go back and forth. The reason it was waterfall was not that people didn't know how to build software faster; it was that the process took very long, and every time you changed the software the whole process had to be repeated. So rather than doing it on a continuous basis, you did it once a year. That's what really created the tendency toward slow releases. And clearly that is not the case today: we have technologies, people are doing things like DevOps, and there is more on top of that. The whole point is: how do you put technology in the middle? You are clearly not removing the people who build the software, the developers are still there, and you are clearly not removing the IT folks who operate it. But you are using technology to make the handoff smoother, and not just the handoff one way; it's bidirectional. You move the software to production, and when production has problems, how do you surface those problems to the people who have to fix them? And there are other things that have happened over time. Cyber was not a big issue a decade ago and it's a huge issue now, so an equivalent technology has emerged to add security on top of this. Cloud has come up; it wasn't there a few decades back, and cloud brings its own challenges. Machine learning is very similar. How do you now put technology in the middle of the collaboration challenge I described earlier, again with the intent not to remove the people (DevOps did not remove the people) but to make the collaboration much more efficient? So what are some of the things that are different about AI and machine learning versus software? Why isn't DevOps sufficient? Why do we need something like MLOps, which is what Vianai is bringing to the table? I won't go through everything, but the complexity is much, much higher than with software. One thing that is not as important for traditional software is how you manage and work with data; that requires a new capability that doesn't really exist today. I already talked about the aspect around risk; traditional software people don't talk about risk in the same way. And one thing that is very unique about machine learning versus traditional software is that you can put a model in production and find out a month later that something has changed. We're all familiar with what happened during COVID; my Nest thermostat stopped working,
because it was lowering the temperature, thinking that nobody was at home. That was okay pre-COVID, but it's no longer true: even though the thermostat doesn't see anybody moving around the house, we're all sitting at home talking to each other like this. So the point is: how do we address issues like risk and the connectivity with data? There are other things in the platform too; we'll talk about optimization in a minute and where that comes in.
And then there's the fact that even after you have put a model in production, things continue to drift, as we call it. There are different kinds of drift. Your input data might drift, as happened during COVID; the input data here being whether I'm at home, and in this case I'm now mostly at home. Other things might change as well. It's a very common problem that you never have enough data to train a model, and then you might discover new things in production. The famous Tesla stories: for everything Elon says, Teslas do still crash, because clearly they haven't been tested on certain things, and you find out in production that you are seeing new data, so your model is not good enough. How do you detect these things using technology? In the end, the people involved still have to decide whether something can be fixed or whether they need to do more work. Again, stop me, Shabana, if I speak for too long; I don't know how deep you want to go into each of these.
Right. So Alex and Tracy, can you give some advice here? Should we move on to the next topic, or should we take a little more time on this one and maybe come back to it?
Is that making sense? Let me know. I just want to make sure we have enough depth from your perspective on this one before we move on.
Yeah, yeah. Naveen, if I'm an MLOps professional working in this space, how would you put it to me: how does Vianai help solve this problem?
Yeah, good question. So we actually split the personas we target with our platform into three categories. The first category is somebody we call (again, these are our names) the machine learning engineer, or ML engineer. The second persona is the MLOps person; I'd call it the MLOps engineer. And the third persona is the person who does the risk work and so on; we call that the ML validator. I'll answer your question for all three of those personas. For the ML engineer, the big challenge today, and this is not necessarily a bad thing, is that there is a very large set of tools data scientists are already using to build machine learning models. There's PyTorch from Facebook, TensorFlow from Google, Azure ML from Microsoft, SageMaker from AWS, and so on, and this continues to proliferate. That's a good thing, because all of them bring new capabilities with them, but it's a nightmare for IT, because now they have models built in all of these different technologies. So this is the challenge for the ML engineer. What is the role of the ML engineer? They have to take a new model in and onboard it into production. And onboarding means that, in addition to taking the model in, they have to figure out how to connect it to the production sources of data. What does that mean? Typically, when a data scientist builds a model on their desktop or wherever else, they may not have access to the actual production data, for privacy reasons as an example. So the ML engineer takes the model, which might come from any of that diversity of tools, and connects it to whatever the production systems are: the ERP systems, the CRM systems, whatever. The first thing the platform does is make that a lot simpler to do; think of it as a simple interface that hides the complexity of the tool diversity from the ML engineer. The other thing, which we'll cover when we talk about optimization, is that these models may need to be optimized so they run more efficiently on the target infrastructure. That is also something the platform lets them do; today they are simply not able to do that, period. So it gives them the ability to manage cost. That's the ML engineer. Then, once this model has been, quote unquote, onboarded, it continues to run, and typically the person responsible for making sure everything is still working is not necessarily the same person as the ML engineer. It could be, but it doesn't have to be; that's who we call the MLOps engineer. Today they have little or no visibility. Take my Nest thermostat example: is the model really working? Typically all they have today is knowing whether the model is up or down, whether it's processing 100 requests a minute, that sort of thing. They have no idea whether the answers it is giving are right or wrong, and those answers could be wrong for many reasons. Again, unlike software, which is very deterministic, where there is only a small set of answers a typical program gives,
an AI model typically gives a range of answers, and as long as the answer is within some defined range it's okay, but it can go outside that range for many reasons: your data could be wrong, your conditions may have changed, et cetera. So we provide the MLOps engineer much deeper visibility into when something is not working, so that they can do the right alerting. They're not necessarily being asked to fix it, because that potentially requires a data scientist to get involved, but at least they can tell when something is going wrong. They have very little visibility into that today, and the platform helps them with it.
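To make that visibility point concrete, here is a minimal, purely illustrative sketch of one such check: flagging when a model's predictions start falling outside the range that was considered normal at validation time. The range, the example numbers, and the alerting hook are assumptions for illustration; this is not the Vianai platform's actual monitoring code.

```python
# Illustrative only: watch whether a model's outputs stay inside an expected range,
# one signal (among several) that something upstream may have changed.
from dataclasses import dataclass
from typing import Iterable

@dataclass
class OutputRangeMonitor:
    lower: float                       # lowest prediction considered normal (assumed from validation)
    upper: float                       # highest prediction considered normal
    max_violation_rate: float = 0.05   # alert if more than 5% of predictions fall outside

    def check(self, predictions: Iterable[float]) -> dict:
        preds = list(predictions)
        outside = [p for p in preds if not (self.lower <= p <= self.upper)]
        rate = len(outside) / max(len(preds), 1)
        return {"violation_rate": rate, "alert": rate > self.max_violation_rate}

# Example: a demand-forecasting model that was validated on outputs between 0 and 500 units.
monitor = OutputRangeMonitor(lower=0.0, upper=500.0)
result = monitor.check([12.0, 480.0, 950.0, 30.0])
if result["alert"]:
    # Hand off to whatever alerting system the MLOps team already uses.
    print("MLOps alert: predictions drifting outside the expected range", result)
```

In practice a check like this would run continuously against the serving logs and is only one of several signals; the point is simply that it looks at the answers themselves, not just uptime and request rates.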
And on the model validator piece, I think we kind of talked about it already. These models are opaque today; we say they are not interpretable, and this is where the validator potentially spends their time. We heard yesterday from one bank, one of the largest banks, that it takes a validator about three months in that bank today, three months, to validate a model, simply because the processes are very manual. They don't have visibility into the model, so they have to go to the designer and say, okay, explain to us what's going on inside this model, in literally a hundred-page document. We're trying to surface a lot of what's inside the model directly through the platform. They may still have to talk to the designer if they don't understand something, but at least the process of surfacing the information is a lot more automated than relying only on talking to the developer of the model.
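As one generic example of what "surfacing what's inside the model" can look like for a tabular model, the sketch below uses the open-source shap library to report how much each input feature contributes to the model's predictions, in terms a validator can read. The model, the feature names, and the synthetic data are all made up for illustration; this is not the Vianai platform's actual explainability mechanism.

```python
# Generic illustration (not Vianai's implementation): summarize which features drive
# a tabular model's predictions, so a validator doesn't need to dig into model internals.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

# Made-up credit-style features, purely for illustration.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income": rng.normal(60_000, 15_000, 500),
    "debt_ratio": rng.uniform(0, 1, 500),
    "years_employed": rng.integers(0, 30, 500).astype(float),
})
y = 0.2 * X["income"] * (1 - X["debt_ratio"])  # synthetic target, e.g. an approved credit limit

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# SHAP values attribute each individual prediction to the input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# Report the average absolute contribution per feature, largest first.
contribution = np.abs(shap_values).mean(axis=0)
for name, value in sorted(zip(X.columns, contribution), key=lambda t: -t[1]):
    print(f"{name}: average contribution {value:,.1f}")
```

A platform aimed at validators would present this kind of attribution through a curated user experience and in business language rather than as raw numbers, but the underlying idea of automatically extracting the information from the model is the same.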
Okay, I'm unmuting myself. That was totally awesome. You should ask the questions, Alex; that's where we should have started.
Oh, no, this is good. Naveen, part of this, just so you know, is that we're going to go through a couple of iterations on our own, identifying the best way to run an interview and get the meat for that particular target audience. So it's a process for us; we're learning all of these things.
Again, all of these things are somewhat well known to the industry and to the competition, and each of them has aspects that are unique and special about what Vianai does and how we do it. I'm not spending a lot of time on that second part yet, so just know that.
Yeah, I want to be respectful of our time, Shabana. And Naveen, do you want to take the next one? Do you have a few more minutes? How do you want to do this?
I think we should cover the optimization topic so that we at least get to some of it. But I wanted to make sure that when we write this up we highlight that many of these things are not new to the industry or to the MLOps folks. While we have a view on that, there are clearly unique things that we do in each of these areas.
Maybe just to summarize on that: if we would say the challenge of collaboration is not a new thing, and it's also not new for this particular space (it's more complex here, but it's not a new concept), how are we addressing it in a unique way? What is the unique thing that we're doing for a known problem?
Yeah. So maybe I'll talk about one aspect of the uniqueness. If we look at the validator persona, or the ops persona, who are part of this collaboration, and especially at what we overall call monitoring a model once it goes into production: we have identified something like seven different kinds of things that are important to check to ensure the model is doing what it's supposed to do, and if you look at the competition, they might be doing one or two of them. What are examples of these things? One is called drift, and there are different kinds of drift, but drift basically means the model is no longer working the way it was working when it was first tested. Drift happens, as I said, for multiple reasons. The incoming data itself changes; that happened to a lot of models during COVID. Another thing that happens is that the assumptions behind what you want to predict change. What does that mean? The assumption was that people would like to, or need to, go to work every day, and that assumption is no longer valid. That is not just a data change; the fact that I don't drive my car anymore, the fact that people's preferences have changed, is called concept drift. And then there's the aspect of the model simply seeing new data it hasn't seen before. That was the Tesla example: even though nothing has changed, there were always cars on the road and there are still cars on the road, the model encounters situations it wasn't trained on. That's what we call drift. Then there's an aspect around uncertainty; I'm not going to go through all seven. Uncertainty is related to the fact that today people have a hard time figuring out whether they had the right kind of data to train on, or whether they had enough data to train on, which are two slightly different things, and it's very hard to figure this out, even for the best data scientists. We are actually creating some unique IP here to make that easier. So we can say, okay, we feel your model is not going to be reliable, but we can also tell you what kind of data you should go find to make your model more reliable. That fits into this whole area of uncertainty. And the thing I think we briefly referred to is this whole area of making the models more explainable to people like auditors and validators. A lot of the tools today surface very technical information that is not very useful to somebody who is not in the AI domain, and we want to put it in words and ways, including the user experience, that they are very familiar with. And so on for the other things like that.
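For the input-drift part of this, here is a minimal sketch of what such a check might look like under the hood, comparing the serving-time distribution of each feature against the training baseline with a two-sample Kolmogorov-Smirnov test. The threshold, feature name, and toy data are illustrative assumptions, not Vianai's actual drift detection.

```python
# Hypothetical sketch of a per-feature input-drift check; not Vianai's implementation.
import numpy as np
import pandas as pd
from scipy.stats import ks_2samp

def detect_input_drift(baseline: pd.DataFrame, live: pd.DataFrame,
                       features, p_threshold: float = 0.01) -> dict:
    """Return the features whose live distribution differs significantly from the baseline."""
    drifted = {}
    for col in features:
        # A small p-value means the two samples likely come from different
        # distributions, i.e. the serving data has drifted from the training data.
        stat, p_value = ks_2samp(baseline[col].dropna(), live[col].dropna())
        if p_value < p_threshold:
            drifted[col] = {"ks_stat": float(stat), "p_value": float(p_value)}
    return drifted

# Toy example echoing the thermostat story: people were away roughly 9 hours a day
# at training time, but are mostly at home in the live data.
rng = np.random.default_rng(1)
train = pd.DataFrame({"hours_away_from_home": rng.normal(9, 1, 2000)})
live = pd.DataFrame({"hours_away_from_home": rng.normal(1, 1, 500)})
print(detect_input_drift(train, live, ["hours_away_from_home"]))
```

Concept drift and unseen-data detection need different machinery (for example, tracking prediction quality against delayed ground truth, or out-of-distribution scoring), which is part of why a single statistical test is never the whole story.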
Yeah, those are great examples. And I'm glad you didn't share the other ones; it leads people to want to know what they are. So, Shabana and Naveen, our goal today was to get through two topics, but if we find that we need to spend a little longer deep-diving into one topic, that's totally fine. So where do you want to go from here, Shabana?
I'm okay to go a little bit longer. Naveen, what's your flexibility to do that?
You guys are standing between me and my lunch. So
Wow, it's one of those days. But, you know, we can go on.
So, Alex, I just want to make sure that we get the depth we need, so we don't feel like something was missing on this particular topic because we moved on to the other one. Do you want to do a lightweight discussion on optimization, or do you want to explore a little bit more on the collaboration topic? What do you think?
Yeah, I think we should just stick with collaboration. We want to get to optimization, but I'm not sure how far we'd get with it in five minutes; that's kind of my thought. So, Naveen, are you open to chatting with us another time about optimization? Yeah? Okay, great. Then, Shabana, let's dig in a little and take the next couple of minutes to go a bit deeper into the collaboration piece.
Yeah, I think one of the things that would be interesting to talk about, and I put this a little bit in my notes as well, is this: if our point of view is that there are these gaps in talent, tools, and technology, and there's a shortage of data scientists (that's the talent part of it), then we might say we're creating tools and technologies that help alleviate that pain. We're not solving it necessarily, but we're alleviating it by building the kinds of tools that help with the collaboration problem. How do you see that?
Yeah, I think that's the big question. So maybe I'll again use my trick of an analogy, but this one is closer to this space. People say all the time that data scientists spend 80% of their time figuring out how to get data to train their models, and that's not necessarily untrue. They do two kinds of activities: primarily, they have to find and collect the data, and in almost all cases in big enterprises that data is not owned by a central organization. A lot of effort has gone into that problem; it's not something we do, we leverage all that capability from companies like Databricks and Snowflake, who are making that job simpler. We are actually doing something in that area as well, and I don't know whether we've announced the details, Shabana, but that's an example of how Vianai can make the talent problem easier: can I take away this burden on an expensive data scientist's time, so they spend it building models, which is really what they're trained to do, rather than spending 80% of their time trying to collect data? So the analogy here is that a similar thing happens in companies if you don't have good collaboration tools to take the model the data scientist has built into production. Data scientists are spending a lot of time on things like this, and this is very common; our tools help with it as well: okay, I've built a model that seems to work on my laptop, and now I want to run it and test it at scale. Well, sorry, it's going to take one month to provision infrastructure, because the IT team has its own processes and steps for that. Then there's the example I gave earlier: once the model goes into production and monitoring finds a problem, how do you get that monitoring information back to the data scientists in a seamless fashion, so they can look at it and figure out how to improve or fix the model? So if the ideal scenario is that all the data scientist does, day in and day out, is make the best possible model using the tools at their disposal, how can you minimize, or completely eliminate, the other things they have to do to get their model into production, or even to create the model in the first place? That is what the MLOps platform we're bringing does. Again, it's the same analogy to DevOps: the reason we went from waterfall to agile is not just that people figured out agile is a good way to do things; it's that the tools became available that allowed agile software development to happen without a lot of overhead. Pushing stuff into production on a regular basis used to have a lot of overhead, and the DevOps tools removed that overhead. The MLOps tools are meant to remove the overhead of moving, or operationalizing as we call it, models into production, which is a lot of the time data scientists spend today and which clearly they don't need to spend.
So if we gave that data scientist Vianai, how much of their time would they spend collecting data versus building models?
So the data-collection part is not really where we spend a lot of time. There are tools like Databricks and Snowflake for that, and that's what we'd leverage; our platform seamlessly connects to those data tools. The next thing data scientists do is build the models using whatever tools they use. Again, that is not something we offer; they might use SageMaker from AWS, or Azure ML, or TensorFlow. We come in primarily in the phase that happens after that. And what I was saying is that this second phase, while it is less visible today because not many companies have models in production yet, is coming. So we are, in a way, getting ahead of the problem from a scale standpoint, because we know that otherwise the data scientist will next be asked to spend a lot of time operationalizing the models and making sure they're still working. The same way the DevOps tools helped them operationalize software, MLOps will help them operationalize AI models. And I see that as a significant time saving for the data scientists as well as for the people in between. So that's the collaboration aspect: how do you enable the collaboration that is necessary between the data scientists, the IT folks, and the people in the risk and governance space, and make it much more seamless than it is today using technology?
And this is my last question for you: if a company had Vianai and a similar company did not, how would Vianai benefit the company that had it?
I think in all the ways we've talked about. We haven't talked about the cost piece yet, which is the optimization topic, but I assume there will be a significant saving in cost when you're actually running these models. There's how quickly you can get models into production: there's a cost associated with not getting things done on time, so you would get more models into production faster, because the overhead is lower. And hopefully you're not building models where you find out post-production that they had problems, from a bias standpoint or from the standpoint that they're producing the wrong outcomes. Companies will try to get around the risk issues by slowing down how models get into production, which is what that bank does, the example I gave, where it takes them three months even though they have 600 people doing this. In the financial industry the risks are too high from a regulatory standpoint if you get a bad model into production, so their approach is to slow things down. For somebody with this platform, that would be the difference.
Yeah. Thank you very much. Shabana, any other questions from your end?
I don't think so. Just because the optimization piece is so much a part of all that we're talking about, can we take a couple of minutes to describe it within the context we've been discussing? The complexity, the collaboration, the "this company has it, this one doesn't": that optimization piece actually matters a lot, even as part of the collaboration and the other things we've been covering. So I think it's worth taking a couple of minutes on where it fits into the personas you described. You mentioned it a little, but where does it happen, and who is responsible for, or addresses, that aspect? And because that's such a differentiator for us, just a couple of minutes on it would be worthwhile. Then we'll wrap up and you can have your lunch.
Yeah. So there is another aspect of machine learning models that makes them different from traditional software, and that's where the optimization piece comes in. These machine learning models are getting bigger every day; you've probably heard the stories about the latest language models having something like 540 billion parameters. Unlike the days of software, when people knew how to build things that were really efficient from a CPU standpoint and a memory-consumption standpoint, young data scientists today are more used to building very large models that need not just CPUs but GPUs from Nvidia, and people are making new specialized hardware beyond that, like NPUs. So these AI models require people to invest significantly in infrastructure, which equals money, to train and deploy them, even to the point where this might become the exclusive domain of the five or seven biggest companies in the world; at least for the largest models, they're the only ones who can train and run them. So that's the backdrop: the cost of training and running these models keeps going up, because the bigger the models get, the more accurate they are, and that is a given. So what Vianai wants to do is alleviate this situation as much as possible. We cannot solve the complete problem, but there is a certain class of models, what we call models based on tabular data. These are not natural-language models or image models, but models built on structured data, which is the most common kind of data you find in large enterprises: data coming from your ERP systems, your CRM systems, your HR systems. For those kinds of models, how do you make them run in a more cost-efficient manner, and make them run faster? Both are important. Why? Because then I don't have to upgrade hardware: if I can make them run on existing hardware, I basically save money. And this is becoming even more relevant if you're trying to deploy AI models on edge devices. I'm sure all of you have cameras in your house; these cameras have more and more AI running inside them, and they have limitations on power and on how good the CPU inside is. So how do you then get a model running inside your watch, or your phone (the phone is actually a pretty big device), or the whole range of IoT devices on the manufacturing floor, in a warehouse, and so on? We want to enable AI to get into all of these spaces at the edge, and also, if you run it outside the edge, to run it efficiently. That is the intent. So, coming back to the personas, Shabana: the ML engineer persona I talked about earlier, who is responsible for onboarding the model. One of the capabilities we want to provide them is the ability to take a model a data scientist has built and automatically do a translation on it, supported in the platform, that achieves this optimization we talked about, without loss in accuracy, or with the slight loss in accuracy that can sometimes happen.
But then you have a model that can run faster, run on a lower-cost device or on an edge device, and not lose any of the good things of the original model. That is the idea. And what we're building is a software solution, and that software solution can then run on any of the hardware improvements that are going to come out anyway. So if somebody still wants to run it on a modern piece of hardware that one of these companies has built, it can run on that too and do an even better job. So there are multiple dimensions: making the model run faster and cheaper from a software standpoint, and then making it run even better on specialized hardware if that's what somebody wants to do, like a phone or a watch or a camera.
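To make the "translation" idea concrete, here is a small, hypothetical sketch of one common open-source way to do something in this spirit for a tabular model: exporting a trained scikit-learn model to ONNX so it can be served by a lightweight, CPU-friendly runtime, including on smaller devices. This illustrates the general concept only; it is not Vianai's platform or its actual optimization technique, and the data and model are made up.

```python
# Hypothetical sketch: "translate" a trained tabular model into a portable, efficient
# runtime format. Illustrates the general idea only, not Vianai's implementation.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
import onnxruntime as ort

# Train a small model on made-up structured data (a stand-in for ERP/CRM features).
X = np.random.rand(1000, 8).astype(np.float32)
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)
model = GradientBoostingClassifier().fit(X, y)

# Convert the model to ONNX, a framework-neutral format.
onnx_model = convert_sklearn(
    model, initial_types=[("input", FloatTensorType([None, X.shape[1]]))]
)
with open("model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())

# The exported model can now be served by a small runtime without the original
# training stack installed, which is what makes lower-cost and edge deployment easier.
session = ort.InferenceSession("model.onnx")
predictions = session.run(None, {"input": X[:5]})[0]
print(predictions)
```

Real optimization work goes well beyond a format conversion (operator fusion, quantization, hardware-specific kernels, and so on), which is where the small accuracy trade-offs mentioned above can come in.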
Right. Fascinating. All right. So, Naveen, Shabana, are we good for now?
Thank you. Yeah, I think that's a good stopping point. And then, Alex and Tracy, maybe we can regroup a little and plan for the next topic.
Yeah. Hey, Naveen, thank you so much; it was a pleasure meeting you. Thank you for hanging out with us, and with your cat, on a Friday afternoon. I look forward to chatting with you again soon. Shabana, are you able to hang on for just three minutes? Yes. Okay. Bye. Tracy, you can go ahead and stop recording. Yep. Gotcha.