Rcast 26: Towards a Living World (Part 1)

    6:39PM May 4, 2019

    Speakers:

    Greg Meredith

    Derek Beres

    Christian Williams

    Keywords:

    behavior

    validator

    lifelike

    life

    staking

    rho

    called

    reproduction

    crossover

    genetic algorithms

    progeny

    food

    continuation

    terms

    functional

    parallel

    talking

    notion

    point

    process

    Welcome to Rcast, the official podcast of the RChain Cooperative. RChain is a complete concurrent blockchain platform designed for maximum efficiency and minimal computational and environmental costs, by utilizing proof of stake. I'm Derek Beres, the director of content for RChain. Each week on Rcast, I'll be talking with the founders of our portfolio companies, as well as Co-op members, staff, and other figures in the RChain ecosystem, about the most pressing issues in blockchain today. Please check out RChain Co-op to learn more about the platform, our community validator sale, and information on becoming a member.

    In the Rcast before last, we spoke about error-correcting codes and gave a surprise twist ending as to how they related to some larger issues. I actually hope this will be a two-part series. I want to talk a little bit about using biological metaphors to think about the sort of distributed consensus platforms that we've been talking about with proof of stake and Casper. Obviously, since the previous work was all about getting liveness constraints, we'll fit those two together, but then open up to that wider perspective, to see that these two lines of research actually fit together in another way. And that other way is an attempt to begin to develop a notion of a living world. A lot of times, in the canon of Western science, life doesn't exist at the ground; life emerges as a phenomenon above that. I want to propose an alternative view in which life really is available from the ground level up. We don't have to jettison or discard what we've come to expect of scientific rigor, scientific precision, and mathematical formalism in order to address a more living world. So that's the larger perspective.

    Let me dive into some of these biological metaphors to help get things rocking and rolling. Part two of the overall talk is lifelike agency and consensus. Recently, my attention was attracted to an article in which a group, I think based in Indiana, though I can't remember exactly where, had built some lifelike materials. The material was self-assembling, self-replicating, and metabolically controlled; pretty lifelike in many respects. The question came up for me: can we formalize this notion of lifelike behavior? That is the starting point for this piece of the work that I'm trying to lay out here.

    The first question is: is mere resource constraint lifelike? On the slide that I have in front of me, which others can see when the podcast is up, we write down some rho calculus processes, where we have a process that is hunting for food from some food source: for( food <- s )P, where s is the source. If we just make P be that minimal thing, which is to hunt for food and then continue, and put that in parallel with an environment that supplies some finite amount of food, that would be P(s) in parallel with E(s), where s is the channel along which the food gets transmitted. If E is really just the parallel composition of a bunch of food-generating events, sending food on s in parallel, maybe n times, then after n transition steps, that parallel composition just goes back to being P(s). Does that make sense, Christian?
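As a rough illustration, here is a minimal Python sketch of the P(s) | E(s) dynamic just described. This is not rho calculus: the channel is just a queue, `food_supply` plays the role of n, and each loop iteration stands in for one transition step. All names are invented for the sketch.

```python
from collections import deque

def run(food_supply, steps):
    # channel s, pre-loaded with E(s)'s n food sends
    s = deque(["food"] * food_supply)
    eaten = 0
    for _ in range(steps):
        if s:
            s.popleft()   # a comm event: P's receive meets E's send
            eaten += 1
        else:
            break         # food exhausted: P(s) just sits waiting
    return eaten, len(s)
```

After the n comm events the environment is gone and only the waiting P remains, which is exactly the point made next: P never dies.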

    Yeah. Yeah, you're just eating all the food.

    Exactly, eating all the food. But the interesting point is that P is not very lifelike, because if you put it back into an environment that supplies more food, P just continues. It's hardier than a tardigrade; it can't ever die. And that, in my opinion, helps us understand what is lifelike: lifelike processes need to be able to die. A thought would be to do something that's kind of two-level. So on the outer surface of this lifelike process, it's searching for food, but in the inner behavior, the continuation of having received food,

    we convert that food into energy. We might want to look at the conditions for how that energy is deployed. You can check and see: if we've got enough food, then we'll convert it to energy and make it available through some metabolic channel. And if there isn't enough food, again, this is just a cartoon, we're just characterizing the rough shape of life, that will trigger some kind of shutdown process, which in the case of cell dynamics would be called apoptosis, which kills the process. We can make that decision independent of the continuation of the process, right? So there's a decision-making apparatus, and that's running in parallel with a selection, or a choice, where the choices are either we get energy from the metabolism or we get the signal to trigger apoptosis and we stop. Separating the decision from the continuation behavior is important because it makes it modular, right? And then we might compare that to another behavior where we're searching for food, and we just check to see: is this enough food, in which case we continue, and if it's not enough food, then we stop. That's a one-level process rather than a two-level process. It doesn't really have any internal logic other than checking to see if that's enough food; there's no autonomy inside. The continuation doesn't have two parts to it that are effectively independent but interacting.
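The two-level split, a decision module feeding either a metabolic channel or an apoptosis signal, and a continuation that merely selects between them, can be sketched loosely in Python. The `ENOUGH` threshold and the food-to-energy conversion are invented for the illustration.

```python
ENOUGH = 5  # hypothetical threshold for "enough food"

def decide(food):
    # decision module: either convert food to energy and make it
    # available on the metabolic channel, or signal apoptosis
    if food >= ENOUGH:
        return ("metabolism", food * 2)
    return ("apoptosis", None)

def continuation(signal):
    # continuation module: selects on the two channels without
    # knowing how the decision was reached
    channel, energy = signal
    if channel == "metabolism":
        return f"continue with {energy} energy"
    return "process halted"
```

The point of the split is that `decide` and `continuation` can be refined independently, which is the modularity being described.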

    It can't decide between different actions for how to use the energy.

    That's correct. That's exactly right. If we think about it, one of the interesting things is that metabolically controlled continuation, where the word continuation means the continuation of the lifelike behavior, is not the only discernible feature of life. Life also autonomously reproduces. You could imagine leaving the decision part alone, so the decision is checking to see whether we have enough food to continue or whether we need to shut down, but changing the continuation behavior so that it's more refined. You leave in the options to just continue as per the status quo or to shut down, but add in another option. Now let's imagine that we have two channels: a metabolic channel and a reproduction channel, and we do a join on both of those. So we're going to get some energy off the metabolic channel, and we're going to get some other energy off of the reproduction channel, and then we'll sum the two energies together. And if it turns out that we have enough energy from this sum, whatever enough means, then instead of just continuing, we will duplicate this lifelike process. It'll do more than just continue; it will be like a cell undergoing mitosis. It'll split into two.

    Do you mean that you are receiving energy from the act of reproduction? Like, you're not thinking of necessarily one individual but a small group, and reproducing is increasing the total energy? Or are you receiving energy that's devoted specifically to reproduction?

    Receiving energy devoted specifically to reproduction. So we imagine that the metabolic channel and the reproduction channel are effectively internal channels. The check that we do when we sum the two energies, one from the metabolic channel, one from the reproduction channel, is: either it's enough, in which case we go ahead and reproduce and the cell divides, or we take the energy that we've got, which is the sum of the energy that came from the food and the energy that came from the reproduction channel, and we put it back out on the reproduction channel. So we've squirreled away some of the food energy for reproduction. And this is very similar to what we see in reproduction in life. The organism that's capable of reproduction has to deal with this race between feeding the parent of the progeny and feeding the progeny. That's a real trade-off: in the case of sexual reproduction, the female has to deal with the fact that some of her energy is now going to the child, and her metabolism often adjusts pretty significantly as a result. So again, this is just a cartoon picture. The Rholang code that I have on my slide in front of me is very, very simple, but it does capture something essential about these characteristics of life. And again, you can divide this into an independent analysis and decision-making mechanism, versus an action that's taken with respect to that decision, which allows one to independently refine those two modules. That turns out to be something that happens quite often in lifelike settings, because these kinds of control structures in the biological world are complex, and difficult to get right. So wanting to preserve structure in a certain way, while adapting structure, gives rise to this kind of split.
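The join on the metabolic and reproduction channels can be sketched in Python as follows. The threshold and the list-as-channel representation are assumptions made purely for the illustration.

```python
REPRO_THRESHOLD = 10  # hypothetical value of "enough" energy to divide

def reproduce_or_bank(metabolic_energy, repro_channel):
    # join on the two internal channels: take the banked reproduction
    # energy (if any) and sum it with the metabolic energy
    banked = repro_channel.pop() if repro_channel else 0
    total = metabolic_energy + banked
    if total >= REPRO_THRESHOLD:
        # enough energy: the process divides, like a cell in mitosis
        return ["P", "P"], repro_channel
    # not enough: squirrel the sum away on the reproduction channel
    repro_channel.append(total)
    return ["P"], repro_channel
```

Successive calls either accumulate energy on the reproduction channel or spend it all at once on a division, mirroring the feed-the-parent versus feed-the-progeny trade-off.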
If you get an analysis and decision-making procedure nailed down, you can reuse it across a wider range of cases just by changing the action part, and that kind of reuse is favored by natural selection. And then it also turns out that metabolically controlled continuation and autonomous reproduction are, again, just part of the story. Life has functional behavior. One of the ways that we think about this: a physicist doesn't really talk about the purpose of the orbit of a planet, or the purpose of the gravitational effect of a star. But when you go up through the ranks to biology, biologists are all but forced to speak of the purpose of the immunological response to the viral infection. It's hard to avoid that kind of language when you're talking about lifelike phenomena. So the question arises: can we modify our lifelike behavior to account for functional behavior? With the rho calculus, it's actually pretty easy, because we can make it parametric in functional behavior.

    So instead of P being parametric just in the channel for the food source, it's also parametric in some functional behavior, and let's say we supply it as a name. So it's parametric in s, for the food source, and in b, where b is some functional behavior, like the web-weaving behavior of a spider; that's an example of a functional behavior. In the case that it does have enough energy to continue, and that energy is not being used for the reproductive process, then when it continues, it also engages that functional behavior. So the continuation is P in parallel with whatever that functional behavior might be. And the nice thing about this, again, is that the code that I'm looking at on my screen really hasn't changed very much. As I add these lifelike characteristics, the code itself is remaining relatively stable, which gives us some degree of confidence that we might be moving in the right direction.
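The parametric version might be sketched like this, with the process represented as a list of the things now running "in parallel". The representation of behaviors as tagged tuples is an assumption of the sketch, not the Rholang on the slide.

```python
def P(s, b, energy):
    # the lifelike harness, now parametric in a food source s and a
    # functional behavior b (named by a string here)
    if energy <= 0:
        return []                      # shutdown: nothing continues
    # the continuation is P in parallel with the functional behavior b
    return [("P", s, b), ("B", b)]
```

Swapping in a different b changes what the creature does without touching the harness, which is why the code stays stable as characteristics are added.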

    The other thing to point out is that if I go back to the original rho calculus paper and look at the way we've encoded replication in the rho calculus, the interesting point is that the code we've written down here is not much more than an elaboration of that code. In fact, in the paper, I point out that the encoding of pi calculus replication that I've given runs away; it's a runaway process. And that reflects the semantics that Robin gave for replication. But nobody ever liked Robin's semantics for replication, precisely because it was infinitary; what you really wanted was some kind of guarded replication. I point out in the paper that this could be done, but I don't supply the code. So you can see this as me finally saying: okay, since I never wrote it down, here's a version of it.

    And so it's interesting, because this lifelike harness was sort of always present in the rho calculus. So what have we got in terms of our lifelike characteristics? Well, we've got metabolically controlled continuation, autonomous reproduction, and functional behavior. But life also adapts, at least on planet Earth, and a good chunk of that generational adaptation is genetically controlled evolution. The interesting thing about the rho calculus is the reflective part: if you have a process, say B(s), then @B, the code of that process, gives you something that is akin to the genome of that process. So if we think of the process itself as the phenome, the creature, then the genome is represented by the code of that process. If we modified the code of that process, we would get a different phenome, a different creature.
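Python's code-as-string evaluation gives a crude stand-in for reflection: the string plays the genome's role, and evaluating it produces the running phenome. This is only an analogy for the rho calculus quote/unquote operators, and the particular behaviors are invented.

```python
# the "genome" is the code of the process; evaluating it yields the
# running "phenome"
genome = "lambda s: ('hunt', s)"      # stands in for @B
phenome = eval(genome)                # stands in for the creature B

# editing the code yields a different creature
mutated_genome = genome.replace("'hunt'", "'graze'")
mutated_phenome = eval(mutated_genome)
```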

    That's a cool idea.

    It's a lot of fun, and we can go pretty deep into this. But before I go down that path, I just want to point out that this does, in fact, relate to Casper-like validation. In particular, we can now implement the notion of slashing as a form of starvation.

    So if we have a validator behavior, and we starve that validator of food, then that corresponds to slashing. One of the things that's been very tricky with respect to proof of stake in general has been how to represent the fact that validators are absolutely going to want to de-risk their investment. And so it's hard to force them into a situation where one staking address equals one instance of the validator behavior.

    And if that's not the case, then you run the risk of not having a secure network, because you don't know how much compute is really associated with a staking address. But here we can be quite explicit about the size of the population associated with a staking address. And that staking address is explicitly represented by s, the food source address, that is the parameter of this lifelike process. But it gets even more fine-grained in terms of how this analogy maps onto the way we think about Casper and RChain. In particular, we can think of the food as the token, and the energy that the food is converted to as gas. So the internal metabolic transformation and the internal metabolic regulation of the continuation of the validator behavior is done in terms of gas, which is our notion of energy, and the food is the REV, or whatever staking token is being supplied in order to run the validator. But the analogy goes even further, because, as I've tried to suggest, slashing corresponds to starvation. If you starve the process of food, then it will go through apoptosis; it will eventually shut down, it won't be able to reproduce, and it won't be able to engage in any functional behavior. The nice thing is you can do what is sometimes called slashing to the bone, where you cut the stake all the way down and the validator can't do anything more. Or you can starve it so that only some of its population dies off. You don't have to slash all the way to the bone; you can have consequences proportional to behavior. For example, if a validator equivocates, you don't want that behavior to persist at all, so you want to wipe it out. The validator is lying, saying one thing to one side of the network and another thing to the other side of the network so that a double spend could happen, so you want to wipe out that population of behaviors. But if the validator stumbled, and there was some corruption in a block, so it either created or promoted an invalid block, that may not actually be the fault of the validator; there may be some extenuating circumstances. And so you could imagine punishing it for not having caught that problem, but not punishing it to the point where it's just removed from the pool. Again, life is a wonderful analogy. The other side of this that's really interesting is that the functional behavior is now serving client requests. When we run this extra B, that's a contract that the validator has decided is worth running. So the mapping of the lifelike behavior, which we did entirely from first principles in terms of how to model life, has now given rise to a harness that captures all of the salient features of validators, down to some pretty fine level of detail. So let me just stop there and check in. Christian, does what I say make sense?
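The graded-punishment idea, slashing to the bone for equivocation versus partial starvation for an invalid block, can be sketched as follows. The survival fractions are hypothetical; the source only says equivocation should wipe the population out while a stumble should be punished less severely.

```python
# hypothetical severities: equivocation starves the whole population,
# while promoting an invalid block starves only half of it
SURVIVAL = {"equivocation": 0.0, "invalid_block": 0.5}

def slash_by_starvation(population, offense):
    # cut the food supply; only the surviving fraction of the
    # validator population keeps running
    return int(population * SURVIVAL[offense])
```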

    Yeah, it's making a lot of sense. One thing I was wondering is, on this current slide, when you're splitting off a behavior after metabolizing some food, it seems you would also need to attach some resource dependence to that behavior, or it would somehow need to remain tied to the organism. Otherwise, you could end up splitting off tons of behaviors and doing multiple things at once.

    Yeah, I was so glad you mentioned that, because we address it in the subsequent slides, and it gives rise to some really interesting analysis. That's the perfect segue. In order to increase the narrative tension, let me defer my answer for a bit and return to this idea of using reflection to develop a notion of genetically controlled evolution of the validator behavior; then I'll come back to what you're talking about. The two ideas fit together in an interesting way. One of the things that we mentioned earlier was that reflection gives us a way to represent the distinction between the genome and the phenome: the genome is the code of P, and the phenome is P. For those people who are not as familiar, maybe haven't heard of it, there's a line of computer science research called genetic algorithms, founded by John Holland. He has a seminal book called Adaptation in Natural and Artificial Systems. One of the things that he does is characterize evolution as a kind of search. He says it's a feature search, and then he analyzes different approaches to search. In particular, you could look at mutation as a way to search the feature space under selective pressure. And he compares that to what happens with sexual reproduction. He identifies a genetic operator called crossover, where you have two organisms, each of which has a pair of genetic information. So let's say we have a mother and a father; the mother has two parts to its genetic information, and the father has two parts to its genetic information. When they reproduce, the progeny can take part of its genome from the mother and part from the father. And this makes sure that there's an exploration of the features of the mother and the features of the father.

    Holland analyzes this and shows rigorously that feature search that is predominantly based on crossover, with a tiny little bit of entropy from mutation, is far superior to mutation by itself. Mutation by itself is largely a random walk and generally ends in bad results in a feature search that is subject to selective pressure, but somehow crossover has a way of strengthening and guiding the search. There's a little bit of subtlety: the genetic information ultimately needs to be able to expand, so you want crossover plus mutation, where the mutation every once in a while expands the features available. There's some subtlety there that I'm not going to go into in detail, but I heartily recommend that people go look at the area of genetic algorithms, and how they apply to what's called genetic programming, that is, using genetic algorithms to have computer programs generate computer programs. And I heartily recommend Holland's original book, Adaptation in Natural and Artificial Systems. But I just want to point out that using reflection, we can define crossover for rho calculus processes in general, which means we can define it for the functional behavior. Just as an example, suppose I have a functional behavior that has a par. So I have B, which is equal to B_m (for either male or mother, depending on which way you want to take this) in parallel with B_f (for either female or father; I'm being completely agnostic as to the order in which these are placed). And then you have B', which is equal to B_m' in parallel with B_f'. We now have the bipartite structure that is necessary for the crossover operator, and so I can define a collection of progeny where you take the components from the mother and the father.

    This is just combinatorics, right? So you can get B_m in parallel with B_f', or B_m' in parallel with B_f, or B_m in parallel with B_m', or B_f in parallel with B_f'. Those are the four possible progeny. And you can do a similar thing for the for comprehension: if you pair up for comprehensions, you can define a crossover operator. In fact, for all of the two-part term constructors, you can define these genetic operators.
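The crossover on a bipartite par can be written down directly. Here a two-part term B = B_m | B_f is represented as a pair, and the four progeny are exactly the four combinations just listed; the representation is an assumption of the sketch.

```python
def crossover(parent, parent_prime):
    # crossover for a bipartite term B = B_m | B_f: each progeny
    # takes one component from each parent
    bm, bf = parent
    bm_p, bf_p = parent_prime
    return [
        (bm, bf_p),    # B_m  | B_f'
        (bm_p, bf),    # B_m' | B_f
        (bm, bm_p),    # B_m  | B_m'
        (bf, bf_p),    # B_f  | B_f'
    ]
```

The same shape works for any two-part term constructor: swap the pair of processes for a pair of names, or a name and a process.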

    The for comprehension is really interesting, because not only are you mixing the continuations, but also where you're looking. And then vice versa, you can evaluate...

    I agree with you; I think you're spot on, because you get more progeny. Because you have this ability to turn code into running processes and running processes into code, you have more total options for generating progeny. That's an important observation. I think you made a really good observation when we talked about this a while back: you said that if you recurse these operators, that might be a way to generate new alleles.

    Yeah, you could completely interlace two codes, and that would produce a ridiculous number of possible progeny. And then somehow you want a good way of selecting from that, randomly or something.

    That's correct. So you can imagine applying this to the generation of tests. I have often used in testing a technique called fuzzing. In fact, I've got a whole approach to this where I define a mechanism whereby you generate a stream of terms. So you give not just one term, but an infinite stream of terms, and that stream is seeded by some initial set of conditions. Here, we're talking about generating that stream of terms from this genetic-like reproduction. In the case of testing, we have a clear sense of what it means for a test to succeed: it means that when you put it in parallel with some program that you're testing, it causes that program to fail in some way, or to behave in a manner that doesn't comply with its specification. And so you can now run this whole mechanism, and whenever a test is successful, it's selected as more likely to reproduce. If you put all the pieces together, you get this infinite stream of tests that are getting better and better at breaking your programs.
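A toy version of that selection loop, tests that break the program are selected as more likely to reproduce, might look like this. Tests are plain strings, the splice-in-the-middle crossover and the fixed generation count are simplifying assumptions; a real version would stream terms indefinitely.

```python
import random

def evolve_tests(seed_tests, breaks_program, generations=10, rng=None):
    # genetically evolved fuzzing: breaking tests get duplicated into
    # the breeding pool, then two random tests are crossed over
    rng = rng or random.Random(0)
    pool = list(seed_tests)
    for _ in range(generations):
        # selection: successful tests are more likely to reproduce
        pool += [t for t in pool if breaks_program(t)]
        # crossover: splice two random tests together
        a, b = rng.choice(pool), rng.choice(pool)
        pool.append(a[: len(a) // 2] + b[len(b) // 2 :])
    return pool
```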

    Which is kind of cool. Yeah. So now we can tie it back to the question that you raised earlier, which is: how do we control the functional behavior part? The functional behavior part is a smart contract that has been deployed by a client. How do we resource constrain that? It's related, because if you analyze how reflection shows up in the rho calculus, it shows up as a loop in the syntactic constructors, so you get a kind of fixed point. We talked a little bit about that; Mike pointed out this construction when we talked about the rho combinator stuff last week. You can do that loop here. At a minimum, you could pick a B that is also metabolically controlled. You could have a B that is an iteration of the validator harness, this lifelike validator harness, on top of some other, more constrained functional behavior. And that's how we answer your question: I can just constrain B so that it's only allowed to run in a nice, metabolically controlled way. The next piece of the puzzle that's really interesting is that we can make this a fixed point. We started with the idea that B is equal to the lifelike process applied to s and some B', where B' is the actual client request. Now we can modify that and say that this recursive operator, say R(P), is defined to be P(s, R(P)). So now we've got the loop. This gives you an infinite tower of validator behaviors.
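A finite unrolling of that fixed point R(P) = P(s, R(P)) can be sketched as a nested structure, each layer of the harness wrapping the layer inside it, bottoming out at the client request. The tuple encoding and the explicit depth are assumptions of the sketch; the real tower is infinite.

```python
def P(s, inner):
    # one layer of the lifelike validator harness wrapped around an
    # inner behavior
    return ("validator", s, inner)

def R(s, depth):
    # finite unrolling of the fixed point R = P(s, R): a tower of
    # validator behaviors, bottoming out at the client request B'
    if depth == 0:
        return "client-request"
    return P(s, R(s, depth - 1))
```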

    I actually talked about this with Jack Eck, who worked with me on the DAO bug paper and did some initial work on Rholang: that you could have this infinite tower of validator behaviors. And here now we've written that down. We've bitten the bullet and said, well, what does a validator-like behavior look like? Okay, here's what it looks like. And then it turns out that it's written in such a way that you can actually characterize this tower as a fixed point, by recursively closing up the equation.

    What does this tower mean, exactly?

    What it means is that you've got a wider notion of validation. It's like a tower of simulators, each behavior being simulated at the level outside of it. And so it's very reminiscent of the reflective tower of 3-Lisp. For people who are interested: Brian Cantwell Smith came up with the idea of reflection initially; he was a student at MIT and coded it up as 3-Lisp. But he expressed it in terms of this infinite tower, and it was quite mysterious to a lot of computer scientists until Wand and Friedman published a seminal paper called The Mystery of the Tower Revealed, in which they give a non-reflective account of the tower, which is similar to this fixed point construction that I've given here.

    They don't tell you about this kind of computer science.

    It's good fun stuff. I heartily recommend that people read Brian Smith's PhD thesis, and then The Mystery of the Tower Revealed. Brian built the language called 3-Lisp, and then Wand and Friedman came up with the language called Brown, which gives you the same kinds of behaviors, but with this non-mysterious presentation. Interestingly, we can also classify different lifelike behaviors. The two ideas, the genetic programming idea, or the application of genetic algorithms, and classification, actually go hand in hand. Just recognize that a lot of biology is essentially categorizing things, classification schemes: kingdom, phylum, class, order, family, genus, species. That's a template, or a rubric, for biologists to give some structure to the zoo of life forms. But the LADL approach, the generation of types, allows us to come up with some types that cover, at a minimum, the functional behaviors. You could imagine using this at compile time, to insist that the only functional behaviors that get attention fit a compile-time characterization, that is, have a certain behavior. But you could also do it at runtime, where you search for behaviors that match a particular type across a stream of submitted client requests. This allows the validator to be selective in terms of the functional behaviors, the client requests, that it's accepting. I think that's of considerable value, because it allows for the evolution of a market, where validators are favoring certain kinds of client requests on the basis of what it is that they do.

    That's an advantage that is not currently contemplated in any other smart contracting platform.

    Other platforms just have no way of distinguishing; they're just validating whatever is required.

    They're just validating, yeah. Like in Ethereum, the validators don't have any insight into the client requests themselves.

    Wow.

    Yeah, this is a fundamentally different approach. But it's also different because we can lift our genetic programming techniques at the term level to the type level. All those crossover operators that we defined for terms, we can also define for types. This is because, as we discussed earlier, every term constructor that was bipartite, taking either two processes, or two names, or a name and a process, can give rise to an obvious crossover operator. With LADL, every term constructor is also a type constructor; that's one of the main points. And so we can lift all of our crossover machinery up to the type level. This is important because we can now do this co-evolutionary thing, where we're looking to classify things at the same time that we're looking for things.

    An example for me, from my own experience, is the mathematical process. My inner dialogue is something like: this must be true, right? And then another part of my brain says, wait a minute, what about this counterexample? It's like a game: the player is proposing theorems, and these are types; the opponent is proposing counterexamples, and those are terms. There's a kind of type-term dialogue that's happening, which is co-selecting for classification schemes and counterexamples to those classification schemes. This is the kind of thing that we can now automate with the rho calculus machinery. And I think that's incredibly powerful, both on the scientific side, to go and do a search for lifelike computational behavior and classify it, but also on the blockchain side itself.

    You could probably utilize huge results in game semantics.

    That makes the search really nice.

    All right, I agree 100%. One of the things that I have talked a lot about in private, but not so much in public, is my opinion about deep learning and its approach to AI. When I was growing up in computer science, AI was not just neural networks; there was a much bigger field, including genetic algorithms. I've become rather sad that deep learning has dominated the scene recently, because it suffers: it's not compositional. If you have a neural net that recognizes the age of faces and a neural net that recognizes the expression of faces, it's not so easy to combine them into a neural net that recognizes the age and expression of faces. Now, we can contemplate some different architectures there. But the point of things like the rho calculus, or the lambda calculus, or the various computer science machinery that we've applied in the RChain setting, is that it's compositional from the ground up. You have compositionality built in; you don't have to go and discover compositionality later. And that turns out to be fundamental: compositionally defined structures typically coincide with monads, or at the very least some kind of endofunctor, so trying to find compositionality post facto is often misguided, to put it bluntly. So here we've got this compositional setting. And then the other side of this: I do understand, Mike has told me, that there are some techniques for trying to find the why of a neural network. But here we have the building blocks; we could actually say what the notion of an explanation is. A run of this classification-versus-counterexample game gives rise to a crisp notion of explanation. We've organized it so that we can get a why, and we can compose.
And we've done this all in an extremely compact form, in a setting where we can resource constrain these genetic algorithms, these search techniques. And I think that makes for probably some of the most powerful AI building blocks that you could have.

    So that's why I think this ends up being kind of fun in that regard. But there's a wider picture that I want to get to next time. Just as a teaser: we can take these ideas about lifelike infinite towers of simulations and reapply them to our notion of an emergent spacetime, and start to get a really different angle on how to think about physical phenomena.

    Amazing stuff.

    Well, we're having fun, right? I mean, the whole point of research is to at least have some fun. Probably most of the ideas are wrong, but maybe once in a while there are some gems in there, and we've got to have some fun along the way. Any final thoughts or comments on your side, Christian?

    I just really appreciate that this is your guiding perspective. I think it is so important. It's not a coincidence that it's going to be a natural and better framework. And I think it's going to make programmers think differently about how they're designing all this stuff, when there's this clear analogy, and the programs are not just floating by themselves but are explicitly like these organisms that are subjective and constrained. I just think you showed a ton of really great ideas tonight. They're just awesome.

    Very, very good.

    Well, hopefully we'll get more and more people counting on them, and we'll see if they have any merit at all.