Thank you, Dave. I'll say I've been pleased to host some of these thoughts about content moderation and large language models on Tech Policy Press, and to learn from the recent events he's done at Harvard and a couple of other locations, where he's talked extensively about these things. I would commend those to you all. I'll put a couple of links in the chat to things we've published lately, including a transcript of that talk, but also a kind of syllabus. Cathy mentioned I teach at NYU and lately also at Cornell Tech, and when I'm trying to get my head around something, my instinct is to make a syllabus, or at least a draft of one. So we've done that for what I'm thinking of as large language models, content moderation, and political communication, which may not hold together as a category of thing, but I think it might; we'll see in the long run. If you have ideas about what else should be on that syllabus, let us know. It's not complete, just a first stab at it, so take a look.

I think Dave has done a good job in the things he's written and said about the opportunity, particularly around content moderation, and I think Cathy and Susan are thinking a lot about this in the context of counterspeech. I'm going to put a piece they wrote for Tech Policy Press in here as well, on how we should think about the limitations of chatbots for counterspeech. And without going on at too much length: with everything AI, the right approach at the moment seems to be one spoon of experimentation to about five spoons of caution, because many people are essentially running live experiments with no control group and waiting to see what happens. Across the world right now we're seeing an enormous number of experiments playing out in election contexts, and a lot of money being spent. In India, we're already seeing lots of very interesting phenomena emerge. There's going to be lots of science and study of these things going forward, and I think it will hold together as a category of research that many of us concerned with tech and democracy, tech and politics, and tech in society will begin to see as a field, one that builds on the work of the last decade or so around social media, now viewed through the prism of generative AI and its impact on political discourse.

So we're excited about that study, I should say, though not necessarily about all of the implications of what's going to happen; on that I'll remain ambivalent, or neutral, for the moment and see how things unfold. But I look forward to the conversation today, and to hearing more from the other speakers about what they're seeing with regard to these phenomena, and what questions they think the community of people on this call should be asking. I think that's the mode we're all in: forming hypotheses, forming questions, or in some cases forming experiments, and thinking through the possible implications of all of the above. So with that, Cathy, I'll stop and turn it back to you.