just give it a minute for everybody else who may be coming.
Am I going to be the odd man out here? I'll be on camera, I'll do it.
I mean, I can join you for two minutes. And, you know, Gustav's here too,
I guess.
How's it going, man?
I got a puppy on Saturday, so it's been busy days.
It's like the most... yeah, I love it. I love it. Is potty training a thing right now?
Yeah. I mean, it's a bit difficult when you live in an apartment. But yeah, we've got one of those small mats that they can pee on in the beginning. So we're using that one, and then taking her down, like, running from the building — there's a small grass spot where we can take her down to. You can't really walk them yet, right? She hasn't had all her vaccine shots yet.
Yeah, you've got that little dog run right in front of the building. Yeah, yeah,
it's a perfect place, yeah.
What's the name?
Uvas? Like, yeah. It means, like, grape in Portuguese.
Nice,
She's sleeping here inside.
Very cute. Hey, Chris, it was good to see you a couple days ago. Ben, you're on mute.
I think we've got everybody here now. Hey everybody. So I'm trying to fight the urge to talk about dogs with Gustav, because I just got a puppy too. We'll have to share pictures later. But this is not the let's-talk-about-puppies meeting. Thank you for coming to the meeting today. We have a lot of interesting stuff to talk about, so I am just going to jump right in. It's puppy season. Yeah, absolutely it is. Okay. Kat, I'm not going to ask you to be the one sharing your screen this time, I'll do it, so I'm not putting you on the spot like that.
All good.
Okay,
So... sorry, it's taking a minute here. Loading, loading, loading.
The suspense. All right, cool. Here we go. Okay, so today we're going to talk about the RFP process and fire starters, like we have been. We are at a key moment where we've actually had a bunch of community reviews done, and we've gone over different approaches to taking the information from those reviews and turning it into decisions. We'll walk you through where that's at. We had a suggestion last week that we pick something we really want to do a topic deep dive on, and it makes sense to talk about the reviewer methodology — how we're actually taking the information we're getting from reviewers and turning it into a decision — which is also going to be part of an analysis report that we're putting out at the end of the week. You'll see in the request for help a link to the draft of that; we're looking for any feedback folks might have. And of course, we'll also go over the deliverable scorecard, as we do every week. So yeah, at the 10,000-foot view, that is what we're going to talk about today. I just wanted to take a second to also talk about the cadence of meetings. Another suggestion that's come up in conversations with some of the folks on the board is that we switch to an every-two-weeks cadence instead of a weekly cadence. I think we're going to need next week's meeting for sure, because that's where we'll have the finalized version of the budget for basically the entire milestone. In each one of the grant proposals, you'll see there's an amount of arbitrum that they've requested, and we're going to go into a series of conversations with the shortlisted 10 or 11 projects. You'll understand this more later, but long story short, after next week's meeting we're very much just going to be in rollout mode, where the program operators are setting up their programs and starting to make them happen.
You know, we'll be facilitating and coordinating, but it might not be as necessary to have a weekly cadence of calls. So that, I think, is the proposal moving forward: we stick to this time, hold the call for next week, but then switch to bi-weekly — every two weeks — from that point on. So maybe I'll just pause for a second to make sure we're on the same page about that, and that it sort of feels right to people. We could also revisit this at the end of the call, once we've gone through the details, but just at that 10,000-foot view, how does that feel to everybody? I see a thumbs up. I see another thumbs up. I see a couple thumbs up. I see Krzysztof thinking.
Yeah, like, I'm not against it, but, I don't know — if you think that it's enough, then that's great. But I've got a feeling that every week it's not that we have too much time and, you know, are bored at the call, but rather that we run out of time and do not complete all the topics. That was my impression so far. But obviously, you know, it's up to you — it's your time that's devoted to this. If you don't feel like you need it, then we certainly can do bi-weekly.
Yeah, it's my feeling, at least as a full team, that it's just not going to be the most productive use of everybody's time once we get through what we're talking about today and next week. That being said, I was actually wondering if you might be interested in doing a one-on-one, Krzysztof, because I feel like you have a lot to contribute in general, and I personally get a lot out of our conversations. So I was actually planning on following up with you and seeing if you might be available for more one-on-one conversations, but we can revisit that. Um, okay, well, why don't we just assume that that's what we're doing right now, and at the end of this call we can always have a gut check based on what we're talking about today. Sound good? Okay? Bless you, Gustav. Hopefully you're not getting sick.
Thank you! It's allergies. I love the puppy, but just allergies in general,
Fair enough. Maybe it's these flowers behind me doing that too. Okay, so program updates. This is really the biggest thing that's happening. You've all seen this RFP process overview document. What is new here, I think — it has been in a dashboard, but we haven't really highlighted it as much as the actual RFP proposals. So this is a link to the Airtable, and we're not going to go through each of these right now. I just want you all to have access to it, and I wanted to bring it front and center so that you can see it, because the next slide we'll go into is where we actually look at the tentative results in terms of the projects that have been shortlisted. So basically, this is the same information that the reviewers had access to, along with the sheet they were using to apply a rubric. I just wanted you to have that perspective, so that you could see what information they were seeing and also get a deeper sense of what these projects actually are that are being proposed. So maybe I'll just pause there for one sec. I'll click through just to show you what it looks like — I'm sure you could all do this yourselves, but I'll take a second to do it. Share this tab... So, as you can see, here are all the different projects. There were 23 in total, and we specifically asked for that allocation methodology so you can see at a glance whether it was proactive or retroactive funding, how the voting rights were going to be allocated — how people were actually going to be given the ability to weigh in on decisions within each of these — and then the platform that's being used, if any, to actually make decisions. That's the information at a glance. And then, of course, if you click on any of these, you get this pop-up window where you can actually see their full responses to all the questions that were asked, and their budget and their milestones and all that good stuff.
So yeah, just wanted to give you a quick little taste of that. I also want to point out that you can see all the proposals for fire starters in here, and we'll talk a little bit about where things are at with fire starters right now. Long story short: we've had a lot of reviews done, and only one project has been approved at this point. We'll talk more about that, but SheFi is the one project that has actually been approved. There are still a few that are awaiting final approval or rejection — we haven't had enough reviews come in for the final three or four of these, I believe, but all the rest have been reviewed by at least five reviewers. Okay, so that was the quick little tour of what's going on in the Airtable. I'll bring us back to the agenda, and I think probably the right thing to do is to jump right into talking about the results. So why don't I take us to that next slide — and there is another link here, RFP results. Rebecca, would you be open to helping walk us through this? Rebecca is the one who has architected a lot of the formulas and whatnot going on here. Sorry to put you on the spot. Yeah,
I'd be happy to — just let me shovel down my lunch. Yeah, do you want to just share your screen, Ben, so at least everyone can see? I'll also share a link in the chat. These are the initial results that have come out of the RFP. So this is basically aggregating data across the reviews — I think most of them have between, like, five and six reviewers. And what you'll see here is we've actually set up different columns for how this decision-making can happen. I think it's important to be very open in saying that there are multiple ways we could go about this, and it does start to get interesting as you move past the very strong decisions. So you'll see, for those top six, we have pretty much agreement across the board, whether that's looking at a quantitative median score or looking at just the number of people who answered yes when we asked them, "Do you think this should be funded?" So we were pretty careful in asking specific rubric prompts where people have to give a score, and then also general gut-check questions: is this a yes or no? And what we're currently faced with is, well, how do we combine that data? But if you take a look at this, we've got at least six that are definitely going to pass — seven, rather — and then these ones are where things start to get a little bit more up in the air. The one thing to note is that we are still waiting for reviews to come in, so not all of these have five out of five. In terms of next steps, we're just going to push for those final reviews to come in by the end of the day, and then also get the board's green light on this. You do have access to those application sheets, there is an option to comment on them, and we do have more data coming in on the allocation mechanisms — summarizing what it is that these programs are doing.
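The combined rule being discussed — require both a minimum number of yes votes and a minimum median total score — could be sketched like this. The project names, review data, and exact thresholds below are illustrative only, not the actual results:

```python
from statistics import median

# Hypothetical review data: for each project, a list of
# (total_score, funded_yes) pairs. With four criteria scored on the
# -3..+3 scale, totals range from -12 to +12.
reviews = {
    "Project A": [(8, True), (7, True), (9, True), (5, False), (6, True)],
    "Project B": [(4, True), (6, True), (3, False), (5, True), (2, False)],
}

def shortlisted(project_reviews, min_yes=3, min_median=6):
    """Combined rule: at least `min_yes` yes votes AND a median
    total score of at least `min_median`."""
    totals = [score for score, _ in project_reviews]
    yes_count = sum(1 for _, yes in project_reviews if yes)
    return yes_count >= min_yes and median(totals) >= min_median

for name, project_reviews in reviews.items():
    print(name, "shortlisted" if shortlisted(project_reviews) else "not shortlisted")
```

In this toy data, Project A passes both tests (four yeses, median 7), while Project B fails on the median despite three yeses — the same pattern as the "on the bubble" cases where yes votes and scores disagree.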
So I mean, I might just slip in a request for help right here, which is, like, we would really, really love to get feedback from the board as we're looking at this,
There were a couple of interesting takeaways as we were talking through the different approaches. You can see these different decision methodologies: three yeses and a median total of over six out of 12; three yeses and a median total of over five out of 12; just looking at the median total; or just looking at the yes/no. I think where we're landing is three yeses and a median total equal to or over six, the reason being that it gives us the best combination of number of people and score, and really highlights the projects that are most broadly supported by the community of reviewers. There are some interesting anomalies, and we definitely learned some things in the process of doing this review. Some reviewers were more stingy than others in terms of how they gave out points. Sometimes people would give a low score but actually say that they think something should be funded; sometimes people gave a high score and said something shouldn't be funded. So it was an interesting process in that sense. Just to give you a little bit of behind-the-scenes: how many reviewers did we have, and how many reviews were done overall? Kat, you might have a better live number than I do, but I think it was 17 reviewers, somewhere in the neighborhood. And what's
that? Sorry. Kat,
just doing a quick count. Oh, thank you. So we asked each reviewer to do five reviews, and it was randomized — we basically just picked randomly which projects would be reviewed by which reviewer. And it was a pretty decent cross-section of people. You may have seen the governance post that I shared in the Telegram thread. 22 reviewers — thank you, Kat. Great. What I have not done is update the governance post with all the new reviewers who got added; we got a lot of attestations from other community members suggesting people as new reviewers. We're also learning about where the outliers are — people who were consistently reviewing things "correctly," and I put that in quotation marks because obviously there's always room for disagreement. One thing that we may do differently down the road — not necessarily at this moment, but for some of the retroactive rewards or other things we'll talk about later — is actually bringing reviewers together for more of a discussion after the scores have come in, so that they're not biasing each other during their initial review, but everybody can then become more of a guild or a community of practice, where reviewers share information and discuss why they made the decisions they did, and the ways they made them. A lot of these reviewers have been asking us for feedback; we tried to be supportive and give general direction, but not bias them in any way towards one decision or another. So yeah, this just gives you a sense of the different approaches that we could take.
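The randomized assignment described here — each reviewer gets five randomly chosen projects, with coverage kept roughly even across proposals — might look something like this sketch. The reviewer and proposal names are placeholders, and the actual tooling used isn't specified on the call:

```python
import random

def assign_reviews(projects, reviewers, per_reviewer=5, seed=1):
    """Randomly assign `per_reviewer` distinct projects to each reviewer,
    always drawing from the least-reviewed projects first so that
    coverage stays within one review of even."""
    rng = random.Random(seed)
    load = {p: 0 for p in projects}  # reviews assigned so far, per project
    assignments = {}
    for reviewer in reviewers:
        # Least-loaded projects first; ties broken randomly.
        candidates = sorted(projects, key=lambda p: (load[p], rng.random()))
        picks = candidates[:per_reviewer]
        assignments[reviewer] = picks
        for p in picks:
            load[p] += 1
    return assignments

projects = [f"proposal-{i}" for i in range(1, 24)]    # 23 RFP proposals
reviewers = [f"reviewer-{i}" for i in range(1, 23)]   # 22 reviewers
assignments = assign_reviews(projects, reviewers)
```

With 22 reviewers doing five reviews each over 23 proposals, this yields 110 reviews, so each proposal lands on four or five reviews — which matches the "most have five or six reviewers, not all have five out of five yet" situation described above.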
Originally, we had actually set the median at a higher level, just to let you know — we first tested it with a median score of eight, and that did not get us that many projects that we wanted to fund. One other thing we've considered is the weighting of the different criteria. One of the criteria was around the uniqueness of the funding mechanism, and in some cases a project was rated really high on the funding mechanism but not so much on scalability, or not so much on fitting within the core mission of arbitrum. There was some discussion about whether we should double the score, or increase the weighting of the score, for the mechanism itself. What we've seen, looking at the results using equal weighting for all of the criteria, is that we still have a good mix of funding mechanisms. If you actually look at each of these projects, the only thing that is doubled up is that there are two doing quadratic funding. And I know that you're not familiar with each of these already, but I can tell you, from knowing all the things that are in here, that each one represents a unique mechanism for making allocation decisions. As a follow-up to the discussion we had last week, I can tell you that this will all be incorporated into the analysis report that we're putting out at the end of the week — the summary document that we're requesting feedback on now. And each one of these grantees will be asked to do a report at the end of the round, which is now all reflected in the milestone document, so you can see the moments where this is all happening.
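The weighting question — whether to double the funding-mechanism criterion or keep equal weights — amounts to a one-line change in how totals are computed. A minimal sketch, with criterion names and scores invented for illustration (the rubric's exact wording is not quoted here):

```python
# Illustrative per-criterion scores on the -3..+3 scale; the criterion
# names are stand-ins, not the rubric's exact categories.
scores = {"mechanism": 3, "scalability": 1, "mission_fit": 2, "feasibility": 1}

def weighted_total(scores, weights=None):
    """Sum criterion scores, applying a per-criterion weight (default 1)."""
    weights = weights or {}
    return sum(value * weights.get(criterion, 1)
               for criterion, value in scores.items())

equal_weighting = weighted_total(scores)                      # 3+1+2+1 = 7
doubled_mechanism = weighted_total(scores, {"mechanism": 2})  # 6+1+2+1 = 10
```

The design question is then whether boosting mechanism-heavy projects this way is necessary; the observation above is that equal weighting already produced a good mix of mechanisms, so the extra weight wasn't needed.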
We're also planning on doing monthly updates, so you'll see analysis reports on a monthly basis — not just as part of our monthly program report update, but also as we're doing assessments, with these assessors, of the milestones of anybody who has received funding within the timeframe of this program. Once a month has passed since the time they received funding, there'll be a monthly assessment of that work and some analysis of it as well. Anyway, I know I just threw a ton at you. Why don't I pause for a second and see if anybody has any questions or reflections based on all of that information. Would you like to step in,
Krzysztof,
do we have any procedures or guidelines for how those projects should be reviewed? Like, is the review simply, "Hey, have a look and tell us what you think about it"? Or is it, "Hey, check these 10 things and see if the proposal complies with them"?
So there were instructions given to each of the reviewers in their review sheet. Maybe we can show you what that looks like. The long and the short of it is that there are criteria and a little bit of direction. We tried not to be too explicit. We did say, here's where you can find details about what our survey said about the priorities for the arbitrum community, and other places where you can find that sort of thing — the constitution, mission, and values of the arbitrum community. But we didn't go so far down the road as to say, you should give a five to something that meets this kind of criteria, or that has this level of uniqueness. Maybe we should go further down the road of offering more guidance. I think that's the kind of thing that will come out of the learnings from the group sharing what their experience was like, and the creation of a community of practice, as I was calling it. One of our next steps is going to be connecting all of these reviewers together to talk about the reviews that were done, what we learned from the process, and how to improve it as we move forward. It's worth noting that this is just a piece of what we're asking these people to review. The same people are also reviewing fire starters proposals, they'll be retroactively reviewing the work from the last milestone, and they'll also be reviewing the overall work at the end of the round, before final payouts go to all these folks. So there are still definitely opportunities for us to continue to refine this process, and to me it feels like a building-the-spaceship-while-we're-flying-it sort of experience, where we're learning and iterating as we go.
Ben, sorry to jump in here — if you want to actually share, I just shared a link with you on Slack, which is an example of one of the reviewer sheets, so that we can see the actual rubric. Thank you. And I think what I'll do, Krzysztof, is we'll also put that in the report where we're soliciting feedback. So it would be great to also have feedback on the language, the phrasing, and the scale that we're using, because this will all be incorporated into designing this for future programs.
So if you're seeing one of our reviewer sheets on your screen now, you'll notice that for each of the different criteria there's also a prompt for comments. And in the sheet that we shared with you, there's a summary of everybody's comments for all of the different projects — we fed that all into AI and asked it to summarize these different comments and combine them. This is what the instructions look like. So this is the step-by-step explanation of what each of these different things means. And then when you get into the specific criteria, there's a little bit of direction as to how to interpret each particular criterion, and the specific things we're asking people to comment on. So there's definitely room for improvement, but it definitely wasn't just, "Hey, please give us your opinions." There were some nudges and direction, and some intention to try to be clear about what each of the scores meant. Worth noting, just in case you had noticed: we were using a plus-and-minus scale, a range between minus three and plus three. So people could actually lose points that count towards their overall score if they were particularly bad in one of these categories. It wasn't a scale of zero to five; it was a scale of minus three to plus three. We'd actually started at minus two to plus two, and then decided there just wasn't enough range in that when we did our first set of tests, so we added an additional digit on either end. Definitely interesting to see how that played itself out. Does that answer your question, Krzysztof, or did you have any follow-up thoughts about it? Yeah, like
I need some time to dig into that. Something I'm also wondering: do we have exact expectations about how those projects are going to report on their execution? Like, are we sure that if we fund something, we will be able to keep them accountable? Let's say that one of those projects is trying to basically rip us off — just assume it for the sake of argument. If that is the case, if one of those projects is trying to simply extract money from us because they assume that nobody is going to check whatever they did — when will we learn it, and when and how will we recover the funds? Great
question. So the first answer is the detailed execution plan. Each one of these projects was asked to provide a detailed execution plan, and we're going to be asking them for updates against it, so we should be able to see whether they actually did all the things they said they would do, as well as the details of their marketing plan, which should also be incorporated into the execution plan. Ultimately, the idea is that the funding will not be released until they hit these different steps along the way. So our next step, assuming everybody is on board, is that we will be going back to each of these grant programs that have been shortlisted. Many of them gave us a range, and it's worth pointing out that if we fully allocated all the money to each of these programs, there would be more money than we have available to distribute. So what we intend to do is have meetings with each of these different grant programs and come up with a proposed final budget for how we're going to allocate the funding across them. That'll basically be our team looking at each of these proposals, making recommendations, meeting with each of these teams, and also connecting all of the shortlisted grant program managers — whom I keep calling grantees — together. And there is actually still a little bit of opportunity for some cross-pollination. One of the shortlisted, or potentially shortlisted, projects expressed that they didn't particularly want to run their own program, and also expressed an interest in being tied to somebody else who's doing a milestone-based review. So, just as one example, we might actually see two programs merge there.
But basically, the step that we're at right now is just the shortlisting of projects, and then each one of these folks is going to have a contract that commits them to their milestones, the amount of money that we're offering them, a payment schedule based on meeting those milestones, and a reporting schedule. So I think that's really the answer to your question: it'll be contained in a contract that we have with each one of them, and we'll be holding them accountable to their milestones. Worst case scenario, if somebody was really just here to try to rip us off — which I don't think is the case for anybody here, but let's say for the sake of argument they were — they wouldn't get much further than the initial amount of money that we give them, and once they start not hitting their milestones, they wouldn't get any further funding. Does that make sense? Yeah, okay,
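The milestone-gated payment schedule Ben describes — no further tranches once a program stops hitting milestones — can be sketched as follows. The milestone names and amounts are invented for illustration; they don't come from any actual contract:

```python
# Hypothetical milestone schedule for one grant program.
milestones = [
    {"name": "kickoff",          "amount": 10_000, "met": True},
    {"name": "mid-round report", "amount": 15_000, "met": True},
    {"name": "final analysis",   "amount": 15_000, "met": False},
]

def releasable(milestones):
    """Release tranches only up to the first unmet milestone: once a
    program stops hitting milestones, no further funding goes out."""
    total = 0
    for m in milestones:
        if not m["met"]:
            break
        total += m["amount"]
    return total
```

Here `releasable(milestones)` comes to 25,000: the final tranche stays held back until the last milestone is actually met, which is the "worst case they only get the initial amount" backstop described above.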
Yeah. Sorry, just for clarity — because, you know, I've been on grants councils for quite some time already, and statistically speaking, there will be someone who will — not even intentionally, but basically, I don't know, plans will change, they will want to pivot, they will have other interests — but they will still like the money that they were allocated. Yeah,
yeah, I think you're right. I mean, just having spoken to pretty much everybody here multiple times, I would be very surprised if anybody pivoted. That being said, I've definitely experienced that same thing myself lots of times. I think the most likely thing is actually that people run into problems with their execution, and that's probably what we're going to have to monitor the most carefully and be going back and forth with them about — there are some pretty ambitious plans here. There's also the potential that we tell somebody we're going to give them significantly less than they thought they needed for their program, and maybe they decide it doesn't make sense to run it, so at that point we wouldn't sign a contract. So I'd say we have, like, a top 12 based on the approach that we're taking, but our goal is to narrow that down. After having all these conversations this week, we should have a finalized list and the total amounts to bring back to you, and then we'll be able to contextualize that with everything else we've got in our budget, and that should basically be the finalized budget for milestone 1b, to be approved at next week's meeting. And then, at that bi-weekly cadence, we could basically just be giving you progress reports on where things are at with all these milestones, what we're learning from the process, and whatever else might be of interest to dive deeper into as this rolls out. Maybe I'll just take us back to that methodology for one more second, because that was the part we were really going to dive into here. So I said top 10, maybe 12. You'll notice here at the top, these projects — until you get to what is number 12 on the sheet, but actually the 10th on the page — these ones are all pretty clearly a yes.
They had the right number of reviewers, they got a yes from at least three — many of them four yeses — and they got a high enough score that they seemed pretty clearly to be a yes from a wide range of reviewers. These ones are basically right on the bubble: Astral and Grant Ships. And these ones are the ones that pretty clearly have not passed. Some of these still have not had five reviews, but they don't look like they have a chance of getting at least three out of five to be a yes. We will continue to get all the reviews done, just so we have finalized data here. But I think the most likely thing is that this conviction voting pilot ends up getting blended in with one of the other retroactive milestone-based reviews — maybe this ReFi on arbitrum one, which is CharmVerse, or maybe this one, which is on Ethelo. I could imagine this program getting rolled into one of those. What's interesting about the conviction voting pilot, the one on the bubble, is that it did not pass based on the median score, but it did based on the reviewers: there were people saying that they actually wanted to see it funded, but they didn't score it highly enough. I think that has to do with how the scores actually broke down in the rubric we were using. So that's the other one that's kind of on the bubble, which I think is going to need a little bit more discussion. So yeah, that is basically the overview of where we're at. This is an interesting one, the Farcaster one — quite interested to see how that plays itself out. Personally, this really highlights to me why it's important to have these decentralized approaches, because there are a few of these that I personally thought were great ideas, but the community, in all their wisdom, did not.
So we're actually already seeing the result of things that perhaps we would have funded ourselves, but the community may have had different views on. It's a shame to see some of these projects that I personally thought were good ideas not get funded, but I'm glad those people still created their proposals, and I hope they find some way to make their projects a reality. We can maybe even share a bit more, when we do the analysis breakdown, of what some of those interesting and novel ideas that didn't get funded were. And I'll be curious to see what the community says in the Governance Forum when there's an opportunity to comment on and discuss some of these ideas — maybe some of them come back as standalone proposals for the DAO, or even become applicants to some of these other programs to get funding. So yeah, maybe I'll move us on, unless there are any other questions or comments about all this right now.
Going once, going twice. Okay, milestone tracker. I mentioned this earlier, but I'll just give you a quick update on what we have changed in the milestone tracker. Is that big enough that you can see it? I've got to zoom in a little bit. All right. So the first thing — that you will not see — is that we took out those sense-making workshops. You might remember in the last meeting we talked about whether it made sense to continue doing them. I am paraphrasing what Pepperoni Joe said, but I think he said that he wanted to die during the last one, which is not the vibe that you're going for when doing these workshops. And I think really what we learned is that it's actually not going to change a lot operationally in terms of what we're doing at this point, given we already have the programs approved. We could always bring another one back closer to the end of this process, if that's something folks think would be of value, but for the moment, based on the discussion last week, we've removed the sense-making workshops from the table. How did a mosquito get into my office? All right, there you go. Sorry, very distracting. Okay, the other thing that you will see on the milestone update is that we have added this section under "gain valuable insights." Krzysztof, you mentioned that we weren't talking clearly enough about the moments where we were actually sharing analysis. So the first one of those you'll see is the publish-RFP-selection-and-analysis item for this Friday, and then we've added a monthly milestone analysis that happens at the end of each month.
So the first one will be end of June, then one in July, one in August. And this was here before, but just for the sake of drawing more attention to it: this is the date where all of the program managers will post a retroactive analysis of their allocation method — basically what they learned from running a pluralistic grants program. Those are really the main things that changed, I think. Basically everything else that you see on here is in progress, or on its way to being achieved. In fact, I think we could say that this one is now done, because we've hit our number of trusted reviewers, although we do continue to add some. And "design outcome and impact assessment criteria and process" — that is also done; it's what we were just talking about. So the things you'll see soonest are the report at the end of this week, the May monthly updates, and really just the rollout of all these programs. And like I said, next week, a detailed breakdown of how all the funds will be allocated to each of these programs and everything else. Any questions about the milestone tracker? All good, if not. Also, feel free at any time — you always have access to this, and we try to keep it up to date. Always feel free to throw any questions or comments in the Telegram; we can always talk about a lot of these things async. We do have a couple of items at the end of the agenda that, unfortunately, we didn't give to you early enough for you to really have as a pre-read — so "pre-read" is a bit of a misnomer — but they are requests for help, and they're two pretty important things. I will click on this one in particular, because it's something we've been working on for a while, and I've been trying to get it to a place where it was ready to give to you for feedback. One of them is the analysis document for the RFP process — basically the first draft of what we're saying about the RFP process.
Big thank you to Rebecca for getting that started, as well as for getting this grant impact proposal to a place where we're ready to talk about it. So yeah, you'll see here that this is basically the process that we are trying to use to, sorry, go ahead.
Ben, I just realized, I think you pulled up the original draft. This is actually the latest draft; I can send that in the chat, yes.
Oh, thank you very much.
There we go. That's the one.
Danke. I'll change it in the agenda too. Not sure how I ended up with the wrong one. Sorry, too many drafts. Yeah, they
also have a very similar title, so I kept getting routed to this one.
Okay, yes. So if you want, you can look at both and see how this has evolved over time. But this is basically the way that we're trying to track the impact of work that's being done by grantees. We also talked about this last week; I reassured you that we have been thinking about this and talking about this a lot. This is all basically the culmination of the ideas that Joe has put into motion. Who, by the way, is still under the weather and asked to pass on his best wishes to everybody. I'm guessing probably a bunch of you were talking to him directly. But yeah, here are the deliverables, the following deliverables expected by the end of the program: the development of the index itself, a comprehensive review, detailed reporting. This is really the process that we're using for badging people to be reviewers and then doing reviews, with the whole timeline breakdown, the benefits, all the things that you would want to see, as well as an example budget of what this all looks like in terms of the actual scoring methodology and the potential costs associated with it. So all of this will be incorporated into that budget that I mentioned. But I just wanted to give this to you for your input, and then hopefully we can finalize this, along with everything else, at next week's call. I know you haven't had time to digest any of this, but I wanted to give you a quick walkthrough of what it was you were looking at. And why don't I just go ahead and change the link in the agenda so we have the right one in there. I know it's kind of asking a lot for you to have a comment at this point, but any thoughts or perspectives on this while we're here on the request for help? All good.
If not, we might be in that rare position of giving ourselves some time back in a Web3 call, unless there's anything else that's a burning need to discuss. I just saw your comment about the mosquito. Yeah, if there's no other comments right now, we could definitely take this conversation offline. There's a lot to review and consider, so maybe we'll just move more of the conversation async, into the documents and the Telegram.
Krzysztof, all right, I don't have a question; I actually have a small favor to ask. Tomorrow I'm doing a presentation on different opportunities in the DAO. It's for, like, a conference in Poland, academia and business. And I wanted to mention, obviously, this program and what opportunities it provides to anybody that wants to get involved in the DAO. If I could ask you to give me some, like, five points on how anybody that is right now not in Web3, or is looking into Web3 and wants to get involved in something, can get involved with this program, I would gladly advertise it tomorrow.
Thank you so much. Yeah, definitely, we'll take you up on that and follow up. We might have to think a little bit about what the right entry points are for non-Web3-native people, but that's one of my favorite topics personally. I'm sure we'll figure something
out. It could also be that the target audience is rather people who are already interested in crypto, but not necessarily in Arbitrum. They probably traded some NFTs, traded some coins, but are not involved in a DAO. The goal of my presentation is to show them different areas where they can actually find something to do in Web3. So if you could just list me, in a few bullet points, some general areas where they could reach out to your programs to figure out if there is something for them, I would love to put it on my slides. That's
fabulous. Thank you for that offer. Yeah, we'll definitely follow up with you directly, and I'll follow up with you about that request for a one-on-one. Just moving forward, okay, well, maybe with that we can all get 15 minutes back to go finish eating our breakfast or lunch. I definitely don't have a bowl of cereal sitting half-eaten in front of me. Okay, all the best, everybody. Thanks for the call. Looking forward to talking more about all this; excited about where things are going with the program, and we should have all kinds of more interesting details to share with you next week. Cheers.