Yeah, it seems like it. Okay, so we can get started. On my end I have two main topics: mainly the HAF restore process, and then the beneficiaries reward hard cap, which was brought up during the Berlin conference.
In terms of what I've been working on, it's mostly hivemind these days. I'm trying to get the tests in order for communities types three and two, but I'm stuck with the restore process, which drastically slows my progress.
Yeah, definitely.
Okay. Just to quickly address the restore point: I think that feature's pretty much done now. I believe Bartek hasn't merged it in yet, but I think you said he was going to merge it shortly. Is that correct, Bartek?
Yes, exactly. And I hope it will be merged during our meeting.
It was blocked by random test failures on CI. I actually spent the last two weeks analysing what happens and why it doesn't work, and fixed that library so the process completes much earlier and is stable. I hope further development will be much easier, because a really significant part of my day was spent retrying randomly failing jobs. As I said, very soon I'll make another merge request bumping the HAF version in the hivemind repository, to be sure that everything works there. So you'll be able to just rebase on the next version.
I hope that solves most of your development problems.
Awesome, that's great to hear. Yeah, I also noticed the randomly failing stuff, so if all of that is getting fixed, that's super good news.
So, just to say a few more words: that library he's talking about is the one I mentioned in my last post — I posted it yesterday, you probably saw it already — which basically covers some of the stuff we've done, but in very condensed form, so it might not always be clear exactly what I'm talking about. Anyway, I mentioned in there the libfaketime library; that's the third-party library where Bartek found the errors. I didn't realise two weeks of time had been invested in that, but I guess that's just the way things go in programming, unfortunately. Especially if it's not your own code — you have to learn someone else's.
Yes, but actually fixing that library was not the biggest problem. The biggest problem was finding out why the tests were timing out, and the reason was that hived randomly stopped producing blocks. And the cause was a bug in that library, which allows us to speed up the passage of time, so we can produce a lot more blocks in the same wall-clock time. The library is very helpful, very useful for testing, but it has some drawbacks. I hope it can still be used, and that our pipelines and tests will be much more stable than they are currently.
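For readers unfamiliar with the library: libfaketime is preloaded into a child process, and the `FAKETIME` environment variable can both offset and accelerate the clock that process sees. A minimal sketch of how a test harness could drive it — the library path is an assumption and varies by distribution, and launching `hived` this way is illustrative:

```python
import os

# Assumed install location; differs per distribution.
FAKETIME_LIB = "/usr/lib/x86_64-linux-gnu/faketime/libfaketime.so.1"

def faketime_env(speed: int, base: str = "+0") -> dict:
    """Build an environment that runs a child process under an accelerated
    clock: base offset "+0" (no shift), sped up by `speed`x via the
    FAKETIME rate modifier."""
    return dict(os.environ,
                LD_PRELOAD=FAKETIME_LIB,
                FAKETIME=f"{base} x{speed}")

env = faketime_env(10)
# subprocess.run(["hived", ...], env=env)  # hived would then see a 10x clock
```

With a 10x clock, a node with three-second block intervals produces blocks roughly every 0.3 seconds of real time, which is what makes the CI speedup possible.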
Yeah, we certainly can't wait for real time during a test when we have three-second block intervals. Okay, so along the same lines — I guess we're switching to what we did at this point — we fixed several issues related to the random test failures Bartek mentioned, which have been plaguing us and obviously plaguing Howo as well. It seems like we've gotten past that now, which I think is really good news.

While we're talking about testing: we've done a lot more inside the testing environment, not only in terms of fixing those intermittent errors, but also unifying the way we do testing across all the different applications, and creating a standardised way for someone who's building a HAF app to introduce testing for their application. As part of that process, one of the things we really wanted was to be able to run tests faster, because we know that's always an issue: the more tests you have, the slower your test runs, and the more applications we get, the more we need testing to be as efficient as possible. While we have decent hardware resources, we don't have infinite hardware resources for testing.

So we've been making a lot of changes to essentially reuse test results where possible, instead of rerunning them when there's no need. For instance, if there's no significant change to hived, we don't need to regenerate hived data — like the state files — in order to run a HAF test. So we've got logic now that detects when the hived code isn't affected by the commits being made, and then reuses that kind of data instead of regenerating it. That makes the tests run faster, and our test servers do a lot less work too.
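The reuse decision described above boils down to a change-detection predicate. A minimal sketch, assuming the CI system can hand us the list of files a pipeline's commits touched — the path prefixes below are purely illustrative, not the real repository layout:

```python
# Illustrative prefixes standing in for "directories that affect hived".
HIVED_PATHS = ("libraries/", "programs/hived/")

def can_reuse_state(changed_files):
    """Return True when no changed file touches hived sources, meaning
    previously generated hived data (e.g. state files) stays valid and
    the expensive regeneration step can be skipped."""
    return not any(f.startswith(HIVED_PATHS) for f in changed_files)

# A docs-only change keeps the cached hived data valid:
can_reuse_state(["doc/devs/README.md"])            # True
# A change to chain code forces regeneration:
can_reuse_state(["libraries/chain/database.cpp"])  # False
```

In practice a CI system expresses the same predicate declaratively (e.g. path-based pipeline rules), but the logic is this simple at its core.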
And that functionality is really useful across not only hived, but HAF and basically every HAF app. So that was a nice improvement. I think it's pretty much near done now, if not completely done — Bartek, is that right? I guess we haven't done it for all the HAF apps; are there any HAF apps we still need to do it for?
Yes, it wasn't done for all apps, but it was mostly completed. As we talked about on Friday, hivemind wasn't yet adjusted to it, so today I made some changes there. I think most of our applications — the most important applications — have it now. And as Dan said, we will have more testing, because we are developing some HAF applications which are big enough to start testing them on CI — for example the balance tracker and the block explorer tools — so we will require it very soon. Applying it to the remaining applications should speed them up and simplify them.
So, just to give an overall view of what we've been doing lately: a lot of our effort has been focused on HAF and HAF applications. As our way of eating our own dog food, so to speak, we've been creating a bunch of HAF apps of varying complexity in order to test the system and find problems.

Probably the biggest of those new HAF applications is the HAF-based block explorer that Bartek just mentioned. One of the things we had in mind early in the design of HAF was the ability to take the capabilities of one HAF app and reuse them inside another HAF app. An example here: the HAF block explorer needs the ability to report balances of accounts. And we have this other application we designed very early on as a sort of example app for HAF — a very simple, purely SQL-based app that tracks account balances, the HIVE and HBD balances. We needed that capability inside the HAF block explorer, so we started integrating it, and during that process we found some issues with HAF's ability to combine apps that way, so we resolved them. One of the issues I mentioned before was multiple context handling, which was a new feature; we added that to HAF, and it's now been incorporated into the HAF block explorer. That's how the block explorer is able to utilise the functionality of the balance tracker inside itself.

While doing that, we also extended the capabilities of the balance tracker. Now it can track not only HIVE and HBD balances, but also savings balances and reward balances; it tracks vesting delegations, and it tracks updates to withdrawal routes for power-downs.
So a bunch of new capabilities have been added to the balance tracker as needed by the block explorer, and all those capabilities can now be reused: you can create your own HAF app and integrate the balance tracker into it if you need those kinds of capabilities inside your own application. That's where we've been going — trying to make HAF into a building-block infrastructure. Essentially, a HAF app is really like a set of API calls, and now you can mix and match these sets of API calls as needed by the upper-level GUI application that's ultimately accessing them.

This is really fulfilling the promise we were looking at early on as part of Hive. The original API for Hive was created back in the Steem days, sort of ad hoc: as things happened, new APIs got added and replaced. We always talked about overhauling the API, but I wanted to do it in a way that allowed flexibility, because we didn't want to create one horrendously huge API, while at the same time we needed the ability to have custom API functionality. That's really what HAF is doing — HAF is effectively our approach to a second-generation API for Hive. If you want a bunch of the legacy API, you incorporate the hivemind application; you can create your own API for your own app; and then we have these building blocks, like the balance tracker, where if you want access to that API, you can just incorporate it into your code.
So anyways, the process is going pretty well, I'd say — we feel like we've been resolving the problems that come up. Another thing we're working on in HAF, besides this kind of reusability, is making sure it's very performant, very efficient. Along those lines, the block explorer was also very good for that, because it does some fairly complex queries on the data, and we saw cases where we could be doing sub-queries too often, which could slow down processing of the blockchain data.

To solve that, we created a new concept, which we typically refer to as structural operation types. Basically, for the various blockchain operations, we've modelled them as SQL types — composite records with fields for the operations. And then we have parsers that parse the operations' JSON — maybe it's not really parsing, but we can look at it that way — and convert it into a structured form that's more pleasant for a program to actually consume. Doing it this way can eliminate multiple queries to the database: once the data is in that intermediate form, your SQL code can just work on these types for analysing and processing the operations. We added that functionality and incorporated it into the balance tracker — actually, I don't remember if we did it for the balance tracker yet; I don't think we did, but I don't expect a lot of gains for the balance tracker either. Later we'll rewrite some of the other apps to use that same approach to processing operations — I think hivemind will be one, and others as well. Anyway, at an overall level, that kind of covers where we're going with HAF.
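The structural-operation idea — parse the JSON once into a typed record, then let downstream code read plain fields instead of re-querying or re-parsing — can be sketched outside SQL too. In HAF it's implemented as SQL composite types with parser functions, but the shape is the same as this Python sketch (the exact JSON envelope shown is an assumption about the serialized form, and the string amount is simplified):

```python
import json
from dataclasses import dataclass

@dataclass
class Transfer:
    """Structured view of a transfer operation: the JSON is parsed once,
    and consumers read fields rather than digging through raw JSON."""
    from_account: str
    to_account: str
    amount: str
    memo: str

def parse_transfer(raw: str) -> Transfer:
    v = json.loads(raw)["value"]
    return Transfer(v["from"], v["to"], v["amount"], v.get("memo", ""))

raw = ('{"type":"transfer_operation","value":'
       '{"from":"alice","to":"bob","amount":"1.000 HIVE","memo":"hi"}}')
t = parse_transfer(raw)   # t.from_account == "alice"
```

The payoff in SQL is the same as here: one conversion into the intermediate form, then any number of cheap field accesses.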
There are a lot of details in what we're doing along the way. Oh, one more thing that's kind of interesting: we announced earlier that we were changing the internal storage format for the operations table, which is by far the largest table in the database, and that allowed us to shrink the size of the HAF database significantly. But even so, we found that when we did a dump and restore, it was saving into a larger format — the restore format was actually significantly larger than the database format itself. So we're now updating the dump and restore formats to make them more efficient, matching the database's internal sizes more closely, and also to be faster: not only is a smaller database faster to write out and restore, it also tends to be faster to process.

Let's see, what else? I'm not going to go over everything we're doing. As for the block explorer itself, I don't have a real date yet for when we'll set up a public instance of it — it's still pretty early, and there are some issues to be worked out — but my guess is in a month or so we'll probably have something ready along those lines.

Departing from the HAF stuff a bit, I'll talk about some of the other things we're doing. We've been continuing work on clive — basically a new, Python-based wallet that runs locally — and also on beekeeper, which that wallet uses to manage keys. One of the things we're looking at for beekeeper is the possibility of also allowing it to run inside a web browser; that's a new thing, so there's just early work going on there. On the clive side, I think we're going to have a beta of it that allows for transactions probably in a week or so.
And then, depending on how that goes, we might set up a public instance — we might basically encourage more people to play with it, even just as a tool for doing transfers and things like that, if it's secure, to look for problems and basically help us out with testing, etc. We're doing some other stuff too, but I don't want to talk about it yet; it's too early, so I'll wait for some of that to get a little closer to fruition before talking about it too much.

Oh, but yes, one more thing I should talk about, and that's the query supervisor — the SQL query supervisor. Basically, this is a way to measure the impact of queries on a HAF database, or in theory any SQL database. It allows us to determine, when you make a query to a database, how much of the disk needs to be read in order to process the query, how much disk might get modified as a result of the query, that type of thing. The idea is to use the SQL query supervisor as a way to time out queries — detect queries that are taking too long and basically terminate them.

This will be important in two applications. The long-term one, which is really the key one that drove it, is smart contracts: when people's smart contracts are running SQL queries, we want to be sure they don't basically tie up somebody's whole smart contract engine, so we need a way to stop them — terminate them — if they're taking too long. The other use of the query supervisor will be for when we want to create something like a public HAF instance that's directly accessible, where you could send read-only queries to it to ask questions. This would be similar in nature to — I forget what it's called —
HiveSQL, I guess — arcange's product — where you can basically send queries to his database and it will give you information about the blockchain. This will allow us to do a similar thing, except with some kind of rate limiting on it, so that we can make something like this pretty public and still have a way that it isn't easily subject to denial-of-service attacks. I'm sure I missed something, but that's probably enough for me to talk about right now — I can't remember any more off the top of my head. So if anybody else wants to chime in on what they've been doing lately, feel free.
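The rate-limiting half of that idea can be sketched as a classic token bucket, where each admitted query spends tokens proportional to its measured cost — the cost model and numbers here are illustrative, not the query supervisor's actual accounting:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: each client gets a bucket,
    a query is admitted only if enough tokens remain, and tokens
    refill at a fixed rate up to a burst capacity."""
    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # burst allowance
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=2.0)
```

A supervisor could set `cost` from measured disk reads per query, so one expensive query drains the bucket as fast as many cheap ones — which is what makes the public instance resistant to trivial denial-of-service.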
I have one question. I found — not an issue, but a special case with savings.
The case is when someone wants to clear their savings account. Let's say, for example, a user decides she should withdraw everything from savings, all her HIVE or HBD, but she still has, let's say, 100 HBD of interest to claim in 10 days. So 10 days later she's not able to claim her interest, because she has no more liquid HBD in savings. She can withdraw it, but she needs to wait three days, and it's kind of a loop, because when she withdraws her HBD there will be new interest, and so on.
It's pretty minimal.
Yeah, but it's pretty annoying.
Okay, well — you can create an issue to describe it. I'm not sure I fully understand it, but if I understand it properly, it seems like it would be for a relatively very small amount in its final form.
Yeah, but you have to do it over and over until there are, let's say, a few cents left. And you cannot clean that out, because there will— unless
you claim at exactly the right time — you can't claim your interest cleanly if you're a bit too late and there's new interest created, right?
But I mean, what you could do is just claim your interest and then withdraw it, right?
The problem is if you've got something automatically claiming it, or something like that, instead of doing it yourself.
Yes. But timing is key at that moment, if you don't want to be left with a few cents or a few HBD.
I mean, I think it's simple, right? You'd turn off auto-claiming — which, you know, isn't even a feature of the blockchain or anything, but a lot of people are using the auto-claiming features of things like Keychain. If you just claim your interest and then trigger the withdrawal, I think the problem is mostly solved. You might have to do it one more time, because you gain some additional interest in the meantime, during the three days before it comes out. But it's only three days of interest — that shouldn't wind up being much, I wouldn't expect.
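As a back-of-envelope check on the "only three days of interest" point — the 20% APR below is purely illustrative, since the actual HBD savings rate is set by witness vote:

```python
def interest_over_days(balance_hbd: float, apr: float, days: int) -> float:
    """Simple-interest approximation of HBD savings interest accrued
    over a given number of days."""
    return balance_hbd * apr * days / 365

# Even at an (illustrative) 20% APR, three days of interest on 100 HBD
# is roughly 0.16 HBD:
leftover = interest_over_days(100.0, 0.20, 3)
```

So the residue after one extra claim-and-withdraw cycle is on the order of cents, which is the scale being discussed here.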
So maybe I'm misunderstanding the problem. If you want to write up an issue that describes it more fully, I can make sure I'm not misinterpreting it. But from what I'm thinking, it sounds quite small.
Yeah. Okay.
A question: will we make account creation tickets transferable or not?
I don't want to. Honestly, I've never been a fan of them to begin with. I think they were created as a scheme to essentially let Steemit avoid doing what it said it was going to do with the Steem it had — namely, onboarding new users. So they created this idea: we'll just give free tickets away instead, and then we can stop having to pay for that.
That's always been my view of that issue; it seemed kind of obvious from the way they designed it. But on the other hand, I know people do make use of them, and I haven't made a strong push to get rid of them entirely, because I guess people found them useful in one way or another. But I don't think making them transferable is a good idea. I think that only cheapens them more, and turns them into essentially a money grab for large accounts that would be able to get free accounts they can resell for cash. So making them transferable has never been something I've been interested in.
Okay, that's all for me.
Okay, any questions about any of the stuff I talked about? Especially the HAF stuff, because that's going to have an impact, I think. I don't think a lot of the guys here are using HAF too much. I know Mahdiyari is, but he's not online, so I'll probably get questions from him offline about it.
I'm just curious about the next hardfork, for the recurring transfer changes.
Yeah, that's a good question. We haven't set anything — we haven't really talked about it recently. At least I haven't talked about it with either Howo or Bartek too much.
I don't know that it's a great time to be an annoyance to exchanges. Maybe I'm wrong, and maybe it's not a problem. So I guess it would depend a little on how important it is to people, and we can take some kind of poll on what people think we should do and when we should do it.
I haven't finished writing what I need to write to make use of it yet, so I'm just trying to work out a schedule.
Yeah. I suspect — I don't know, I need to ask Howo and Bartek about this part — but I think that part is in pretty good shape. Is that correct, Howo?
Yeah, on my end it has been merged into develop, and as far as I'm concerned it's ready. Okay.
Short of the normal pre-release testing of the whole entire blockchain, which obviously we have to go through before we do a hardfork. Yeah, okay. That's good.
I'll pipe up again when I've actually written what I need to write. Okay. Sounds like I'm the only person kind of waiting for this, really.
Yeah. I don't know of anybody offhand that's been asking for it.
Okay, good question. Anyone else?
Yeah, hello guys, can you hear me? Yeah. Sorry — this is Mcfarhat joining this time, so it's a pleasure being here. Sorry for the voice quality, because I'm a bit under the weather.
You actually sound pretty good. Oh, good.
Thank you. So I actually had a question regarding hivemind. We've been playing a bit with notifications via hivemind
on Actifit, where we've had our own custom notifications for a long while, which are off-chain.
I mean, yeah, we just store them in our own database and notify users with them. So I've been looking into the hivemind notifications, and we ran into an issue where we noticed that some of the notifications are not actually being sent out to the users. We double-checked on several apps like PeakD and others, and the notifications don't seem to be coming through — like a vote notification gets missed, or a reply. Is this a bug, or a known bug? Is anyone aware of this issue?
I'm not aware of it, but I haven't really looked into the notifications a lot, to be honest. I mean, I'm aware of what they do — I read my notifications, and I guess I normally see votes on my notifications page, but I don't know if I'm seeing all of them, that's for sure. I've never looked at the code for how the notifications are processed. Bartek, I think, is probably the most familiar with it.
Yes, I think it's best to describe the specific case where notifications are lost, and we can then try to analyse it. The scope may be quite complicated, because it is implemented in SQL, using several levels of views to aggregate data and make things work fast. But if we get exact cases where something went wrong, we can definitely check whether that's a bug or some expected behaviour. It's hard to say right now.
Okay, I'd be happy to file a bug. I did mention this on Mattermost a while ago, but yeah, I'll look into it again on Sunday. The reason we're talking about this is we were trying to create some sort of add-on or plugin that would pop up notifications within the browser, to show users when they receive a vote or any type of—
Real-time notification. Yeah, exactly — you'd do a push notification.
Exactly, exactly. So we thought it was a good idea, but when we noticed that many notifications are being missed, it might not accomplish the purpose.
I actually wonder if it would be interesting to support those kinds of notifications at the hived level too, as an alternative to hivemind, if you're looking for push notifications. Because we've built this notification structure inside hived now, where you can basically register with hived to receive notifications of various events that happen. And this totally bypasses hivemind entirely. Right now, for instance, I don't think we push notifications for votes — I don't know exactly what we do push notifications for, but it doesn't seem like it. If it works well, it might be possible to just add some options to hived to allow direct notifications, where you could set a variety of things you want to be notified about and register for them. It wouldn't be something that's enabled for every Hive node, but maybe for something like a personal API node — not a public API node that's driving an app. I can see something like that being potentially interesting to do.
I think the hived notifications were designed to notify about the changing state of hived itself, not exactly the changing state of separate accounts. So truthfully, here I see it as a HAF application — that seems to be the best solution.
Yeah, I don't disagree.
Definitely — if you think that, please write an issue and we will take a look at it.
I guess it's an interesting point too. So how much does the notification stuff rely on other parts of hivemind? Is it something that can be stripped out easily, or with some effort?
You are asking about hivemind notifications?
No, no — I'm talking about how, right now, the notifications are built into hivemind, part of the entire hivemind, but it strikes me that it's probably a relatively separate feature from most of hivemind.
Yes, exactly. It was actually the latest API offered by hivemind, specific to condenser, for showing notifications to users — so it's only designed for that purpose. It also marks some notifications as seen, and in another request you can fetch the last number of notifications. It's hard to say more without looking at it.
I was just suggesting — I don't know that it's worth the time or effort right now, but it's something that could potentially later be separated out into a separate sub-app, is what I'm thinking.
Yes — similar to how reputations were moved out of hivemind into a HAF app.
Yeah. Like I said, nothing urgent or anything like that — just an idea off the top of my head when I was thinking about it. I think it's mostly separate from the other tables.
I think that kind of makes sense. And this brings up my other suggestion: notifications are currently a bit limited, in my opinion. You'd want notifications for things concerning balance transfers — like incoming money someone sent you, some HIVE or HBD, a power-down that took place, or you sending out funds. This could be a major security help: if someone, for example, took over your account and sent funds out, you would know — you would get a notification. Yeah, exactly.
So having this available, whether on-chain, via hivemind, or via a HAF app, would be a great added value: all of a sudden you get a notification — "hey, funds went out" — and if you didn't authorise it, you'd go check and take measures to prevent it. Yeah, absolutely.
So that was it. Thank you.
Anyone else have any questions, or things they want to discuss that they're working on, etcetera? If not, I guess we can move on to Howo. I think — Howo, did you get through all your questions, or do you want to talk about them now? I can't remember.
No, but we can talk about — well, one thing that came up during the Berlin meetup recently.
Yeah,
I didn't know what that was. Yeah, let's talk about that. So yeah, basically — a bunch of people on chain, and I've heard about this multiple times, but it was brought up to me again — are using beneficiaries as a way to reward people, like team members, etc. Which is favourable because you can reward them using Hive Power directly, which incentivizes them to stay. So I'm not going to go through the entire thing, but long story short—
I mean, it's fine — it's not just one person or group of people here, so feel free to explain the whole thing.
Yeah, fair. Basically, they really like paying people — artists, contributors, creators, etc. — using one post that has beneficiaries set up for the creators, and that becomes the payment, in a way. The issue is that right now beneficiaries are limited to eight people maximum, and as far as I know — I don't know why there is such a limit. I remember this was brought up probably a year or two ago — yeah, vaguely familiar — and we couldn't really find why there was such a limit. Your best guess was that it was probably something where Steemit rolled the dice, and that was it. So yeah, basically: do we have any strong opinions in regards to increasing that?
I have no— I mean, as long as there's no performance issue, I don't have a problem.
It is performance-related.
Please, tell us more.
I don't think I can.
Okay, so — two years ago, more or less, someone did that research already, and it is because of performance, right?
No, it's quite recent. We'll talk about that in the next meeting or two. Okay,
Sounds good to me. It's not an urgent matter anyway, and it's definitely not going into the next hardfork. So yeah — if you can keep that in the back of your mind and give us a good explanation as to why or why not, so we can explain it to the community, that'd be great. Sounds good. That's it for me.
Okay. I guess we'll give the obligatory couple of minutes in case anyone else has anything to talk about.
I'm thinking that's a no — although it wasn't quite a few minutes, I think we can still conclude; anyone could have unmuted by this time. So thanks guys. I guess, Howo, you're going to create the transcripts etc. as normal?
Yes. Well, "I create" — it's more that the AI will do it, but hey.
I give you all the credit anyway.
To be fair, there is a lot of re-reading that's worth doing, for sure. We're not there yet — it struggles when multiple people speak: it's just one block of text, and you have to really listen to the recording and figure out, like, "wait, who said this?"
One day it'll identify the speaker.
I think some tools do, but I haven't managed to get it working.
I have a subscription to Otter, which I use for legal stuff. So if you want, ping me and I'll run it through Otter, which will do speaker recognition reasonably well.
Okay — I can send you the recording.
Message me tomorrow — send me the recording straight after, and I'll do it.
timing is always good.
Sure. Yeah. Alright, I'll send it to you.
Actually, for Podcasting 2.0, there will be some really kick-ass transcript-generating stuff coming very, very soon. So I might use that.
For a code like this. Each participant could be recorded in separate track, and it will be easier but I don't know how
The AI picks out voices very well nowadays. Yeah, I'll help with that tomorrow.
All right. Awesome. All right. Thank you, everyone. Thanks.