I am Shir Chorev, CTO and co-founder of Deepchecks. At Deepchecks, we do continuous validation for AI-based applications. That means from the research phase, when you're iterating and trying to find the best version, how well it performs and what its problems are, through CI/CD and the decision to deploy it, and also evaluating in production. Of course, a big part of our focus is evaluating LLM-based applications, which is a bit different than evaluating LLM foundation models, right? There are quite a lot of benchmarks, but for real-life use cases there is a bit of a focus shift. And due to LLMs, we train fewer models ourselves, because it's more the external API or model vendor. So I do some training myself, but that's more like running, hiking, and surfing, not necessarily stuff relevant to models, but still fun training.
Hello, everyone. I'm Yubei Chen. I did my PhD at Berkeley AI Research, and then I did my postdoc with Yann LeCun. Now I'm an assistant professor at the University of California, Davis, and also a co-founder at Aizip. My lab works on trying to push the frontier of self-supervised learning; that's why my lab is called the Self-Supervised Learning Lab. And Aizip is working on pushing the frontier of the most efficient AI systems in the world.
Thanks, everyone. I haven't introduced myself yet, so my name is Yaron Singer. I'm CEO and co-founder of Robust Intelligence, which is a company that deals with AI security. Okay, good. So again, I want everyone to take maybe 30 seconds to give us your own definition of what LLM safety is, in your opinion.
I mean, for me, I guess with an investor hat on, I tend to look at the business impact, the economic impact. Of course, there's the human impact too, but just across the board, the mistakes we can avoid that are going to end up being most costly. I think a couple of mistakes are okay, but how do we build systems and culture that notice and see mistakes early, so that we can avoid bigger mistakes later?
Okay, so, yeah,
There are many great definitions; I'll give you mine. The way I see safety is a controllable LLM. If a model is safe, that means we can control it. And what that means is that under any settings, any conditions, the model should be operating under the guidelines or the policies that we have defined for it, and it's not resulting in any unexpected or harmful errors.
Irene, what is your definition?
I didn't get the 30 seconds in the memo, so I'll try to condense. In addition to meeting the technical requirements of the model, an important piece for me is to consider the end-to-end process of the LLM product, from creation to the end of the user journey. So for instance, when we create, do we have people with diverse backgrounds and demographics to anticipate potential risks in the middle of working on this model? Are we supporting the professionals working on this to make sure that they are safe? And lastly, not to forget the last piece, the CX and UX. You have a great safety feature, but what is the CX following it to ensure a user has a good experience with your safety features?
Sure. So I think when we talk in general about safety, we're talking about avoiding unwanted behaviors or dangerous behaviors, and it changes a bit between different use cases. So we have, for example, places where we may be worried about PII leakage as one example, or about bias or ethics. And there may also be considerations that are really relevant for a specific use case: we wouldn't want it to recommend how to do something that we don't want done within our company or our product, or to recommend a competitor. So that is kind of on the verge between safety and company policy and guidelines.
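As an illustration of the kind of use-case-specific guardrail described here, a minimal sketch of a PII check on model output might look like the following; the regex patterns and function name are assumptions for the sketch, not a production-grade filter.

```python
import re

# Illustrative patterns only; real PII detection needs far broader coverage.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_output(text: str) -> list[str]:
    """Return the PII categories detected in a model response."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

response = "Sure, you can reach the customer at jane.doe@example.com."
violations = check_output(response)
if violations:
    # Block, redact, or route to a human instead of returning the raw response.
    print(f"Guardrail triggered: {violations}")
```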
So I'm going to give a slightly different angle. If we think about safety as a spectrum, on one end you have mathematically provable safety. That's mathematical models, very rigid, but you can prove they are safe. On the other end you have AGI, human-level intelligence, something you can't really test exhaustively. Let's say I test you for an hour to see that you can drive, and then I agree that you probably can drive. Why is that? Because that's generalization-induced safety. So any system should be somewhere in between, but you have these two poles.
Nice, yeah. I think we have different perspectives here, and that's really valuable; everybody has their own definition of safety, so it's good to have a holistic understanding. I want to start off with you, and I want to ask: for all the people here building models and thinking about it, what is the stage that you think is the right one to start thinking about AI safety and alignment?
The short answer is every stage. We need to build models that have safety as an inherent feature and not an afterthought. And the way I like to divide it is basically into three parts. Number one, you define the guidelines or the policies that you want your model to follow. This would include the many harms that folks just talked about: not leaking PII, not recommending competitors, and so on. You can start with a general definition, and there are efforts like MLCommons' AI safety work that create these taxonomies, but more often than not you also want to iterate and fine-tune them based on your product. As an example, a healthcare LLM would probably require zero hallucinations, compared to, say, story generation. So the first step, I would say, is the policies. The second step is that you attempt to align your model with these policies, and where you do this alignment depends on what part of the stack you have access to. If you have access to the pre-trained weights, fine-tuning and preference learning are usually a great place to do alignment, and that includes collecting good-quality human data and fine-tuning a model on top of it. Even if you just have access at the API level, you can still build things at the system layer, such as classifiers; this is where all the prompt-engineering solutions also lie, and so on. And finally, once the entire setup is made, you have the policies and you've attempted to align the model, then the third part is monitoring, where you continuously evaluate your model and check whether it is following the policies you set for it. And in case there are new harms that you discover, you would again want to go back to the drawing board and do the three-part loop again.
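A minimal sketch of the three-part loop just described (define policies, align, then monitor), as it might look at the system layer when you only have API-level access; the class names and trivial keyword checks here are hypothetical stand-ins for real policy classifiers, not any particular vendor's tooling.

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    name: str
    check: callable  # returns True if the response violates the policy

@dataclass
class Monitor:
    policies: list
    violations: dict = field(default_factory=dict)

    def evaluate(self, prompt: str, response: str) -> bool:
        """Step 3: check a production response against every policy."""
        ok = True
        for policy in self.policies:
            if policy.check(response):
                self.violations[policy.name] = self.violations.get(policy.name, 0) + 1
                ok = False
        return ok

# Step 1: define policies (keyword checks stand in for real classifiers).
policies = [
    Policy("no_competitor_mentions", lambda r: "CompetitorCo" in r),
    Policy("no_pii", lambda r: "@" in r),
]

# Step 3 feeds back into step 2: violation counts become the next round of
# alignment data (fine-tuning examples, preference pairs, prompt fixes).
monitor = Monitor(policies)
monitor.evaluate("Which tool should I buy?", "You might prefer CompetitorCo.")
print(monitor.violations)  # {'no_competitor_mentions': 1}
```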
That's fantastic. So it sounds like the answer is that we should think about this at every stage of building and training a model, all the way from pre-training through post-production. But I guess what we're also interested in is beyond just the model. If I think about it more holistically, what do you think we should be doing beyond the model to ensure safety measures for AI and LLMs?
I think, yeah, for the model itself there's a lot of strategy and techniques, and I'm not actually technical enough to speak on the model, but there's plenty you can do even on the orchestration layer, in terms of what you give it access to. A lot of the logic happens in code: you can put blockers in the code, you can add humans in the loop. And as we start actually using these, coaching people on how to use them matters. I mean, if we think about when email came out, email was dangerous, because a prince could tell you they were going to send you money, and then you would give them your personal information. But as a culture we evolved; everybody now knows you're not supposed to send your personal information to a prince who wants to send you money. I think we have to learn those same lessons as a culture as we learn to build with LLMs. So I don't think alignment is just about the model itself, but about the orchestration, about how we use it. And then when it comes to aligning the LLM, who are we talking about aligning it with? Do I want to align it with me, my tribe, or do we want to align it with global values? What are global values? We're not aligned as humans. So I do think aligning AI is not just a conversation about aligning AI, but aligning AI with humans and humans with each other, and just building toward harmony. Kumbaya.
So basically, you think of humans in the loop as a critical aspect of it, and thinking about diversity is an important piece of this. Okay, yeah, I agree. So as we're thinking about what the best way is to build LLMs safely: Yubei, I'm curious, you obviously think about this quite a bit. One could say there's a school of the data-driven approach, and then there's a school that is white-box. So when it comes to building safe or secure LLMs, what do you think is the right approach?
So this is going to involve quite a bit of speculation, but I like that question. Essentially, as I said, there are two poles of safety. On one hand, you have the mathematically provable safe: because it's mathematics, you prove it safe and it's safe. On the other hand, you have general intelligence, human-level intelligence, that you trust is safe without testing it very thoroughly. I feel that we are quite far from either of those. On one hand, we don't have the mathematical theory for the models we are using. On the other hand, the generalization of our models is not at the human level. I'll give you one number; this is a very important number to keep in mind, probably the most important one. Think about it: from when a baby is born, every second you can acquire maybe 30 tokens, and you do that for 12 hours per day for 20 years. How many tokens do you get? About 10 billion. So you get 10 billion tokens, and our language models are trained with 10 trillion tokens. That's three orders of magnitude more data, and yet the generalization achieved is still less than human intelligence. So the emergent generalization of our models is not quite there; there is still a large gap to close. So the data-driven approach right now, even though scaling is one of the most reliable principles we have found, is probably not all you need. It's not enough.
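The arithmetic behind that comparison, as a quick sanity check of the numbers quoted here (roughly 30 tokens per second, 12 waking hours a day, 20 years, versus a pre-training corpus on the order of 10 trillion tokens):

```python
# Rough estimate of tokens a human hears or reads over 20 years of waking life.
tokens_per_second = 30
seconds_per_day = 12 * 3600           # 12 waking hours
days = 20 * 365                       # 20 years
human_tokens = tokens_per_second * seconds_per_day * days
print(f"{human_tokens:,}")            # 9,460,800,000, i.e. on the order of 10 billion

llm_tokens = 10_000_000_000_000       # ~10 trillion tokens in a modern pre-training run
print(f"{llm_tokens / human_tokens:,.0f}x")  # ~1,000x: roughly three orders of magnitude
```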
So it's not enough because we don't have enough data?
I think data is very important, and high-quality data is even more important. But somehow we have not found the principles yet. We leverage emergence, and when we say "emergent property", that means we don't understand it.
We still haven't found the right algorithms, the right training methods. There's more to be discovered on the model level than there is...
Exactly. So before we have truly general intelligence, I think there are two approaches. On one hand, you build a data simulator to test your model as well as possible; you try to build world models in that sense, so you can simulate as many aspects of the world as possible and use that to test your model. On the other hand, we should also turn to mathematics and theory and ask: can we truly understand our signal space? Because it's a high-dimensional space, and no matter how hard you try, you probably won't be able to exhaust all the possibilities in that space. There is always the possibility that you will find the next sample that breaks your language model. So can we actually discover the mathematical principles that generalization comes from? That's what we call white-box models. That's an incredibly hard topic: can you build something that is truly understandable but can still be competitive? So I think these are the two intermediate approaches. On one hand, build world models that reflect real-world aspects as much as possible, so you can use them as your data simulator, your tester, to test your model. On the other hand, we should not give up too early on building the theory behind it.
No, that makes a lot of sense. So it sounds like we have progress, and there are still more papers to be written, so that gives you more work to do. Yeah, that's good news for me. That's good news for you. And there are more algorithms to be developed, which is good news for all of us. Okay, so you mentioned jailbreaks a little bit, and I want to transition into talking about guardrails. Shir, you think about this quite a bit: building guardrails, implementing guardrails, and you're seeing across the board what happens when people implement them. So I'm curious, from your perspective, what are the biggest risks when people or enterprises adopt guardrails incorrectly?
Okay. I think, of course, a big part of being able to build and trust LLM-based applications is to have safety measures and guardrails in place. But I actually want to give a bit of a counterintuitive perspective on the topic of safety, implementing guardrails, and the risks of LLM safety in production. Of course, the risks of not having aligned models and not having clear policies and adhering to them are immense. The thing is that no matter what you do, I don't think we'll be able to achieve 100 percent, zero mistakes. We will be able to decrease them significantly. But sometimes people approach it like, what if we make a really bad mistake, we hire the wrong person, or we make some not-great statement, and that may be the end of the world. I'm not minimizing the risks; I think they are significant, and that's why we're actively working on this a lot. But the counterintuitive way to look at it is: what happens if we try to be 100 percent risk-free? That is actually something we're seeing with some of the use cases or companies that are in various phases of trying to adopt LLMs in production, and that is the risk of overcorrecting and saying, okay, wait, this and this can happen, so maybe I'll wait a bit more before I adopt, or before I even try it out, and what will happen if it does something I don't predict? There is a delicate balance here, and of course it is very much use-case dependent. Say you're an e-commerce company and you want to extract product descriptions automatically, or you're a financial institution building some internal use case, and because it's financial data you don't want to take the risk of something inappropriate going out. The thing is that if you're too careful about it, it's very likely that you will eventually find yourself one, two, or a few years behind in the race to build better technology and be more robust and adaptable, whether in the tech you offer or in the ecosystem in general. So that's a different kind of risk that isn't necessarily taken into account when you're building your risk profile: what is the risk of not taking the risk? That's a recommendation to keep in mind, because the technology is evolving, and we have to find the balance between keeping safe on one hand and not stopping ourselves on the other.
So, Shir, you feel like there's a trade-off. It sounds like the safest LLM is the LLM that rejects all requests, right? That's a very safe way of implementing an LLM, but it's not functional, or it doesn't function in the way that we would like. So there's an obvious trade-off there that we need to identify.
By the way, I think the trade-off that we see, and it's very prominent, is that when more safety guidelines went into the foundation models, suddenly their quality deteriorated. So that is a trade-off that we see as users of them, and the same trade-off also exists when you're using LLMs and incorporating them into your company strategy.
There's a weird parallel with "I could do it better" and people struggling to delegate to a person. I feel like you're essentially saying the same issue happens when delegating to AI: if you keep thinking "I could do it better," you're never going to delegate, so you're never going to scale beyond how much you can do yourself.
Right. By the way, one of the challenges there is accountability. As a side note, look at autonomous driving: maybe they drive much better than human beings, but still, the one time there is a problem, what do you do? When it's a human being, you can blame someone. So I think that is maybe part of what we need to solve in order to be able to take that risk.
Yeah. For me, very pragmatically, what I'm a little bit frustrated with is the lack of benchmarking around security and safety. Because, as we said, capturing these trade-offs is hard. Obviously you can have a very safe LLM that just doesn't respond to anything, but you somehow want to capture an LLM that can answer certain tasks while still being secure against others. So I feel like we still haven't, as an AI community, nailed how to measure it. This is why we have good people in academia pushing for these
things. Before we find the theory, maybe. I think there's a very interesting blog post written by an early pioneer in this field, David Mumford; some of you may know him. He was an early pioneer in this area before language models, like Shannon in the 50s. Actually, when Shannon created information theory, the first game he enjoyed playing with his partner was predicting the next token, basically what we now do with pre-training. But we are still very far from that theory. The point of David Mumford's post was that maybe in this generation we can never have the true theory for our AI. I'm not sure I buy that, because then it sounds like I should quit my job. But on the other hand, I think before we have the theory, as a community we should try to build better and better benchmarks, bigger and bigger benchmarks, because this is a high-dimensional space, and no matter how hard you try, you probably cannot exhaust all the possibilities. So that definitely needs a community effort. Yeah.
I also think that creating benchmarks for language itself, for evaluation, is an open problem. If you think at a high level about the responses we are giving as humans right now, how do you evaluate whether something is a good response or not? It's a similar problem. Of course, we have benchmarks with a one-word answer, which are the most popular ones, but the issue is that they're not representative enough. And to that, I feel that we will of course get better at winning benchmarks, but human evaluation is the one thing we will probably keep needing and doing more of.
Another possibility, which I think has great potential, is mechanistic explanation. My group does that; we try to visualize the transformers, and Anthropic does that as well. It's pretty much what has already been used in many different sciences: you try to find why your model is doing what it's doing, and if you really understand why, then you can improve it. So it's building debugging tools, in that sense, based on mechanistic explanation, which has been used in computational neuroscience for a long time. I think it's a very important direction as well.
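A minimal sketch of where that kind of transformer inspection starts: extracting per-layer attention maps from a small open model with the Hugging Face `transformers` library (assumed installed along with `torch`). Real mechanistic-interpretability work goes much further, but these are the raw signals one visualizes.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load a small open model and ask it to return per-layer attention maps.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_attentions=True)

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one tensor per layer, shaped [batch, heads, seq, seq].
first_layer = outputs.attentions[0][0]   # first layer, first example
print(first_layer.shape)                 # [num_heads, seq_len, seq_len]
# From here one would visualize heads, ablate them, or trace circuits:
# the "debugging tools" the panelists are pointing at.
```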
Very cool. Yeah, I agree. So we've been talking about benchmarking, building models, and training. But Irene, I want to turn this next question to you, because I feel like you have a different perspective than some of us here. I want to take a step back and ask: why are we so obsessed, why do we care so much about LLM safety? Just to put this in context: people surf the web, and there are safety issues there too, but we have access to things that are available on the web, and if there's a nasty or bad website, you don't hold the search engine accountable. Similarly with social media, there's an ongoing debate about how much the platforms should own and mitigate safety. But somehow, with LLMs, it feels like there's a different sort of scrutiny or expectation around safety. So I'm curious about your perspective: what is it about LLMs, about generative AI, that triggers this kind of response and expectation about safety? That's
a great question. I don't think we are obsessed with safety because we have fundamentally different expectations; some basic safety expectations, I think, all of us can agree on, like that LLMs and web search should not produce CSAM and should not produce hate speech. All these basic things we can agree upon. But I think we are so focused on LLM safety, and in my opinion rightfully so, for three main reasons. One is novelty. The second is how fast it is changing and developing as we speak. And the third, and I know Yohei will agree with me on this, is its huge potential to fundamentally change who we are, how we see ourselves, who we are as a people and a society. So let me quickly touch on each of them. Web search: there were plenty of problems decades ago when we first had that new development, and I think we are at a similar stage now. For web search, we had decades to build our trust and safety practices, our procedures, expectations, protocols, but with AI we're catching up; even though the foundational research has probably been happening for decades, it only exploded into public consciousness recently. So there are a lot of new problems we need to solve, and we actually have quite a lot of clients coming to us saying: we are developers, we created this great LLM, we don't know anything about trust and safety, we don't know how to handle CSAM or how to respond to law enforcement, how can you help us? So there's a lot we can learn from existing fields, and at the same time these are not exactly the same problems, and we need to learn how to solve them in this space. The second piece: as a result of this fast-developing nature, we are seeing a lot of downstream changes. For instance, the very clear cases of egregious content used to be handled by human content moderators; now that has largely been replaced by AI, but the tasks left to humans are the edge cases, new cases, new problems to solve. Like I mentioned, there are jobs nowadays I couldn't have imagined decades ago, and we need to rise to that challenge and see what these professionals need psychologically and how we can support them, because in content moderation there have been media reports on unsafe working conditions, and I think there are similar concerns for LLM safety work as well. And the last piece is how it might change us. For me as a psychologist, it has been like the world turned upside down. We are born to crave human connection, face-to-face connection, human touch. But now we have AI chatbots, chatbot companions, mental-health AI-based screeners. I think a lot of them are great; for instance, there are not enough mental-health clinicians in the world to take care of everyone, so some of these are awesome. And at the same time, at what point is AI appropriate and helpful, and what is the long-term consequence for us as humans, for how we see ourselves?
So you feel like the transformational potential is large, and that's a big part of what drives the emphasis. Yeah, exactly. No, I agree. And adding to that, I think it's also the fact that we're basically using LLMs now as public-facing software, and then the safety of what they produce is a big piece as well. So I agree. I want to now take this last question, and I want to get everybody's perspective. I'm curious about which risks you think are most important. At our company, last time I checked, we secure against 300 risk categories; I can't even enumerate all of them. That's just to say there are a lot of categories of risk that people are defining and people find important. So given that we have this panel of experts, I'm curious to hear from each of you: what are the category or categories of risk that you think are most important? And then I'm also curious to hear which ones you think are most challenging to protect against.
I'll answer this in two parts. On the more hands-on, practical level, since I'm using AI all the time, here's the risk I'm worried about: if I run a BabyAGI agent and I forget to turn off my Replit, my OpenAI credits skyrocket. So cost, right? That's the practical one. But if we think big picture, what I'm more worried about is: how do we make sure they don't get access to things they shouldn't get access to, things that could be harmful? I actually think it should be somewhat similar to security protocols, like how we give access to different people, in terms of dangerous databases, dangerous materials. But yeah, there's so much history behind how humans have solved
it. So data access. Data access, you feel, is one of the biggest ones?
Data access, materials. I actually think in the long run it's similar; it's the same risks as with people. If it's dangerous for people to have, it might be dangerous for an AI to have.
Okay, good. Curious to hear...
Hard one. It's like asking which is the worst crime. For me, personally, given the landscape of LLMs and how they're used, I would say anything that assists with human trafficking, especially if it starts to involve minors; that is really bad for our society. I know answers that assist in violence or terrorism are bad too, but my take on that is, if someone is searching for it, they'll figure out a way to do it, and it is contingent on their actions. But if LLMs end up generating content that can assist in human trafficking, that is directly feeding into the demand. And for most challenging, technically, I think it's bias, because it's very hard to agree on the definition of what bias is, and then it's also hard to solve. We've seen examples where well-meaning companies, when they've tried to solve bias, it has led to some mishaps, like changing history.
So the assistance, the LLM's capability of assisting in heinous crimes, namely human trafficking, and then also the bias. Those are some of the categories.
I think bias is the most challenging to solve, right? Yeah.
Irene, curious to hear.
I actually have a very similar viewpoint, and I promise we did not plan this. When I first heard this question, the immediate ones that came to my mind were the offline harms: human trafficking, CSAM, suicide, violence, terrorism. But today I actually want to spend time talking about something that's not as obvious but that I think we should nonetheless keep in mind, and that's bias, representativeness, and inclusiveness. I'll give you one example. Years ago, I had the fortune of treating a young man presenting with social anxiety symptoms. He has a relatively high-pitched voice, and he's an immigrant, so he speaks English with an accent, and as a result he was made fun of, because our society has certain expectations of what pitch or voice is considered manly. And it breaks my heart to hear that he wants to go through vocal cord surgery, that he wants to go through therapy to get rid of his accent. It breaks my heart, because to me these are beautiful, natural human variations in our experience, in our bodily function. So part of me thinks about AI: your AI agent, your AI assistant, what kind of voice do they have, what kind of pitch, what kind of English or other language do they speak, what kind of accent? Are you indirectly giving the message of what is considered normal, good, expected, okay? So I think about these things, and I think it's hard because, exactly like you said, it's sometimes hard to measure. Think about my case: how much of the social anxiety symptoms came from his genetics, his previous experiences, or societal expectations? I cannot give you a number, so it's very hard to measure. And a lot of times with bias, the lack of harm is also hard to measure: how do you measure something not occurring? So I know it's challenging, and at the same time, for any developers out there, I want you to start thinking about these things as you develop your LLMs.
Yeah. Shir?
So until now I've talked mainly about the various adoptions of LLMs in organizations, and there are quite a few very critical risks there that lots of companies, like Robust Intelligence and Deepchecks and many more, offer solutions for. It won't make it perfect, but I think it really, really helps. In that aspect, there is something to be worried about, but I think we're controlling it, and I believe that, a bit like cybersecurity for organizations, there is some balance of improving the tech, making sure it works properly, and so forth. The direction I'm personally worried about, and I think for LLM-related risks this is the bigger thing, is intentional misuse in general. Cyber also has intentional misuse, whether it's phishing, fraud, credit cards, or warfare. But a big difference here, and that is also what makes it very challenging, is that the scalability of LLM-based attacks is just incredible. With so little effort, and yes, some cost, but no crazy cost, you can just make so much new data, whether it's fake news, whether it's new code to try various attempts at cyberattacks, whether it's phishing. It's just so easy, and the barrier to entry is so low; you need so little knowledge to try something. And I don't think there is a way around it. Once the models are out, the tech is out. When we want to use it properly, yes, we can put the guardrails in place, but as long as there are just a few who want to use it improperly, whether it's countries, whether it's organizations, whether it's criminals, it's something that personally does worry me, just because it's such amazing tech. Yeah, that was a strongly felt one, because that's my message.
I hope everybody's okay. All right.
All right, so I'll be the last one. As a research scientist, I feel that my field has gone up and down multiple times in the past, so this is an incredible time to be in, because in the past it didn't work, and now it's finally starting to work and take off, which is very exciting. But somehow I have to remind myself that it has only just started to work, there has been a lot of excitement, and we still have very limited understanding of all of this, of what we are doing. So I think the biggest risk is that, before we truly understand our systems, we don't want to over-promise. We want to make sure we are clear about what we understand and what we do not understand; there is still quite a bit of fundamental work to be done there. That's probably the biggest risk I worry about.
Okay, well, with that in mind, I want to thank everyone again for participating in the panel. It's been super educational for me, and I hope for everybody else as well. And I hope to catch you all in the back and chat some more. So thanks, everybody. Thank you. Thank you.