This podcast is brought to you by the Albany Public Library main branch and the generosity of listeners like you. "What is a podcast?" "God, Daddy, these people talk as much as you do!" Razib Khan's Unsupervised Learning.
You know that genetics plays a huge role in our health, and more people than ever are using genetic testing to determine risk for diseases like cancer, for themselves and their kids. So I want to tell you about Orchid. It's the only company that does whole genome testing for embryos, testing before your child is born. If you're doing IVF, this is a clear choice, because now you can reduce risk for thousands of single gene disorders, including heritable forms of autism, pediatric cancers, and birth defects. Check them out at orchidhealth.com.
Hey, everybody, this is Razib with the Unsupervised Learning podcast. Today I am with my friend Nicholas Cassimatis, actually Dr. Nicholas Cassimatis. But before we start, I do want to say one thing I say every now and then, but not that often. Can you please rate my show on Spotify and Apple, if you're listening to it on those platforms? I don't know what else is out there; I used to use Stitcher, but they shut Stitcher down. Basically, if you rate it positively, that's good. If you review it, that's even better. And the reason is more distribution, so more people can listen to great podcasts with individuals like Dr. Cassimatis here. I don't say that that often, and I know it takes a couple of minutes out of your day, but please do it. I don't have an ambition of getting up there with thousands of reviews, but I'm in the mid 150s right now on Apple, I think, and at 144 on Spotify, and I'd like to get a little higher. I know that there are thousands and thousands of people who listen to this, so if every other person could rate or review it, that would be great. Okay, that's boring. I want to talk about interesting things. We are going to be talking about artificial intelligence today with someone who is steeped in the field. I introduced Nick as Dr. Cassimatis, and I know Nick here in Austin, in the tech scene, a bit, so if I seem a little bit fresh with him, I'm not being inappropriate; he's just a friend of mine. But he is also a doctor, with a PhD from the Massachusetts Institute of Technology, MIT, under Marvin Minsky. For those of you who know something about popular science and artificial intelligence, that name will ring a bell. We can talk about this, but my impression is that he's probably the most prominent artificial intelligence researcher of the second half of the 20th century. Nick would know, though he probably has his own biased opinions. He was a postdoc at the Naval Research Laboratory. He was a professor, an academic, at RPI, in cognitive science and computer science. Then he went into industry: he worked at Yahoo on deep natural language, and then at Samsung, where he was the head of AI research. And now he is the founder of a startup called dry.ai, which I use in various capacities, and which we will talk about later; you have probably seen Dry links here and there on the newsletter. Basically, I wanted to talk to Nick today because there's a whole aspect of him that is not surfaced very much, at least in the social contexts where I see him, which is his academic side as a researcher: all the work he did, all of his insights, and the opinions and conclusions he has drawn. People like me have opinions and conclusions about artificial intelligence, but we don't really know anything, to be honest; likewise, I'm sure he has certain opinions about genetics, but he doesn't really know much there. I'm trying to give an analogy for why it's important to listen to the area experts, even though we should also be careful about appeals to authority. Nick knows this stuff, and I'm going to try to pour some of that knowledge out in this podcast. The first thing I do want to ask you, and I have notes that I put in here about this: what is artificial intelligence? How do you define it? And also, what is deep natural language?
These are words that people think they know, but often these straightforward words are misleading when they're taken out of a colloquial context. So why don't you just start from there?
Sure, Razib. Thanks for having me, by the way. Yeah, artificial intelligence is a hard thing to define. To me, it just means making computers do things that we would characterize as intelligent in people. It's that simple, basically. A lot of people want to define intelligence when they start talking about AI. But, you know, there's no really good definition of life either, and it really doesn't matter for the study of biology. Most people who study biology might have some memorized definition of what life is, but it never really affects the actual details of the science. Like, whether a virus is a living thing or not really doesn't matter for any actual scientific inquiry into how viruses work, how they spread, et cetera. So I don't worry too much about what AI means. That's basically how I characterize AI: getting computers to do things we call intelligent.
Yeah, so I think this semantic issue is more for people on the outside, because in many cases, like you said - and I don't want to sound like a hater, but I will sound like a hater - philosophy of biology, or philosophy of science in general, philosophy of physics, whatever: no one really pays attention to it when you're doing the science, because you're a practitioner, and you're doing it. It's like the Supreme Court saying they know something's obscene; they know porn when they see it. Obviously, nobody on a porn set is asking, what is pornography? Everyone knows what's going on. So with artificial intelligence, we could talk about the Turing test and all these other things, but the reality is, I'm assuming when you're in the lab, or working on the computer, you're not having deep philosophical reflections. I mean, most people don't. Researchers are trying to get publications out, they're trying to solve hard problems, so they're probably not going to, actually I know they aren't going to, mull over insoluble problems, which is what a lot of philosophy is, to be candid. But what is deep natural language, though? Again, deep, natural, and language are all straightforward terms, but together, what do they mean?
Right. People first started using that term about 20 years ago, probably, I would say, and I really started using it myself in my own research about 10 to 15 years ago. At the time, most systems that understood natural language understood it at a very shallow level. The field of natural language processing was very good at classifying the part of speech of a word, or at classifying what the topic of a document was: was it about finance, about sports, about weather. So our goal at the time, in my own research and my lab's research, was to figure out how to get a much deeper understanding of what is being said, what people are talking about, so that you could do a much better job of conversing with them, solving problems, and being useful. That's really what deep natural language means. In that context, it actually has nothing to do with deep learning.
I see. Yeah, there's only a finite number of words out there, so things start overlapping and people get confused; I just wanted to clarify that. So you worked under Marvin Minsky. I mean, he's a famous dude. He was literally on the edge.org website, part of Brockman's intellectual scientific salon. So you got your PhD with, as they say, good pedigree: you are a doctorate under a scientific celebrity, but also someone who made a lot of original contributions and kind of pioneered modern artificial intelligence after World War Two, from what I remember reading. Obviously there are other people out there, but Minsky looms large. I know that there are arguments and theories about the different types of artificial intelligence and its philosophy, and I vaguely remember reading stuff about top down and bottom up and all these things. Can you just give us a general sense of, let's say, paleo AI: AI in the '60s, '70s, '80s, before you came onto the scene, before all of the things like neural networks and everything like that were involved?
Sure. Yeah, and Marvin Minsky was involved in a lot of it, was on a lot of sides of it, and was actually quite controversial about it. So there was a boom in neural nets in, I would say, the early '60s; I don't know the exact dates, but that gives you a timeframe. And Marvin Minsky wrote a book called Perceptrons, where he and Seymour Papert proved certain limitations of neural networks, and that caused the field to abandon neural networks for a while. What you saw after that was what's generally called symbolic AI, and that means several things, so I'll just give you some broad examples of how people characterize it. One of the earliest approaches was based on search. The idea there is: if you're playing a game of chess, for example, and you want to find the best move, what you do is think, okay, what are my possible moves? Let me imagine I make each of those moves. Then let me imagine the moves my opponent can make in response. And let me do that, say, ten steps forward, and therefore search through the sequences of possible moves I can make and my opponent can make, and find the most optimal move. Another example of search would be planning a path, like your GPS does, from Boston to New York while you're driving. What you would do is search all the possible routes. So you go, okay, I'm on a certain street right now; I can take three streets from here; let me imagine what would happen if I take each one. Again, you search through all those possibilities. That approach to AI was based on search, and it was quite powerful; a lot of people identified AI with search algorithms, where you're just searching through spaces. So right now, for example, whenever you use your phone or your car to navigate somewhere, you're using what was once considered an AI algorithm. That's one approach to AI. Another approach was based on theorem proving. The idea of theorem proving in AI is this: let's say you want to solve a problem. You can characterize it as a theorem proving problem. Basically, the actions you can take are your axioms, the current state of the world is the set of facts you take to be true, and the desired state of the world is the theorem you're trying to prove. Then you feed that to a computer that can prove theorems, and the proof it generates turns out to be a sequence of actions that will help you achieve your goal. It's kind of abstract, but it turns out you can characterize a lot of problems as theorem proving problems, and that has a lot of great benefits. Once you characterize things in terms of theorem proving, you can make all kinds of proofs about what a system does, what it can be guaranteed to do, what can be guaranteed to be correct or not. And that's the opposite of neural nets today, right? The problem we have with LLMs and ChatGPT and so forth is that they make stuff up, and we really have no way of knowing why they're making stuff up.
Whereas these other approaches to AI that I'm talking about, which came before, were very explicit about the actual knowledge that was encoded into the system. There are several other kinds of symbolic AI, but I think that gives you a flavor of what AI once was, compared to what it is now. And there are some trade-offs. But I hope that answers your question.
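To make the search idea concrete, here is a minimal sketch in Python of breadth-first search over a toy street map, in the spirit of the route-planning example above. The map, the place names, and the function are invented purely for illustration; real navigation systems use more sophisticated relatives of this, like A*.

```python
from collections import deque

# A toy street map: each place lists the places you can drive to next.
# The map and names are invented purely for illustration.
STREETS = {
    "Boston": ["Worcester", "Providence"],
    "Worcester": ["Hartford"],
    "Providence": ["Hartford", "New Haven"],
    "Hartford": ["New Haven"],
    "New Haven": ["New York"],
    "New York": [],
}

def shortest_route(start, goal):
    """Breadth-first search: extend partial routes one step at a time,
    exactly the 'imagine each possible move' idea, until the goal appears."""
    frontier = deque([[start]])   # each entry is a partial route
    visited = {start}
    while frontier:
        route = frontier.popleft()
        if route[-1] == goal:
            return route          # first complete route found uses fewest steps
        for nxt in STREETS[route[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(route + [nxt])
    return None

print(shortest_route("Boston", "New York"))
# ['Boston', 'Providence', 'New Haven', 'New York']
```

The same explore-the-possibilities skeleton, with a board-scoring function and alternating players, gives the chess-style game-tree search Nick describes.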
Yeah, it's interesting. I know about the whole search stuff; I think anyone who's on the edge of tech knows the importance of search and search algorithms in modern technology. It's interesting that you point out theorem proving, though. Theorems are a big deal; they're the origin of real modern mathematics, and they go back to the Greeks. But for various reasons, American academia, the educational establishment, has de-emphasized theorems, favoring, for example, inductive learning in math as opposed to deductive. So when you say theorem, I think a lot of people kind of know what you're talking about, but they haven't done very many theorems. I haven't done very many; I've taken applied math, but they just kind of stopped emphasizing proofs over the past generation or so. In contrast, with search, I think a lot of people can just imagine what's going on; they feel like they have an intuition, and sorting algorithms and all these other things are a big deal in computer science, so if you have friends in tech who are devs, you've heard about that. It's interesting how these two things have different salience to us in 2024, and that's just because of the practical applications. I want to go back, though. Can you tell the audience what a neural network is? Because, again, neural sounds like biology, but obviously it's not biology.
Sure. Yeah, neural networks are, at best, a crude approximation of how neurons actually work, at the very best. But let me think of the simplest way to characterize this. Take the simplest example of a neural network: let's say we're looking at pieces of fruit, and we're trying to decide whether they are strawberries or bananas, say. And let's say there are two features that matter to help you decide: one would be the color of the object, and the other would be its shape. So you can imagine having a graph, if that means anything to people. You have a neuron that encodes the color, another neuron that encodes the shape, and then you feed those into another neuron, which encodes the category of the object. So let's say you're trying to decide whether it's a strawberry or not. When the red color neuron feeds into the strawberry category neuron, you want a high activation, so it says it's a strawberry. When the yellow neuron feeds into the strawberry neuron, you want a low activation, because you don't want yellow to make you think of strawberry. So the idea is, you can think of all these neurons encoding either concepts, or things that you see in your environment, pixels in your retina, for example. They're all connected to each other this way, and they're charging each other up, and the degree to which they charge each other up decides what kinds of inferences you make about what's happening. The challenge of training a neural network is to decide how much one neuron will charge another neuron, in such a way that the network makes the correct predictions.
Yeah, I mean, wait, could you say that you're training the neurons? Or is that not the right word?
People say you're training the network. And the way you train the network is deciding how much one neuron will activate another.
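Here is a minimal sketch of Nick's fruit example as a single artificial neuron in Python. The features, weights, and numbers are all invented for illustration; training would normally set the weights automatically, but here they are chosen by hand to show what a trained connection looks like.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of its inputs, squashed to (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))   # sigmoid activation

# Features: [redness, elongation], each on a 0-to-1 scale (invented numbers).
# Training would learn these weights; here they are set by hand so that
# redness excites the strawberry neuron and elongation inhibits it.
weights, bias = [4.0, -4.0], -1.0

strawberry = [0.9, 0.1]   # red and round
banana     = [0.1, 0.9]   # not red, and long

print(f"strawberry score: {neuron(strawberry, weights, bias):.2f}")   # ~0.90, high
print(f"banana score:     {neuron(banana, weights, bias):.2f}")       # ~0.01, low
```

Stack layers of neurons like this one, and learn the weights from data instead of setting them by hand, and you have a trained neural network.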
Yeah, and I want to come back to neural networks, because I feel like they burst onto the scene, then kind of disappeared a bit, and now they're back, and I want to talk a little bit about that. But first: you sent me some notes, and there are various things I want to touch on. What was ELIZA? I think I've read about ELIZA, but I forgot about it; it seems very familiar.
Yeah, it's good that you mentioned that. I think it was in the '60s, might have been the early '70s, but thereabouts, someone created a computer program, and you'd go and talk to it, and it acted as a therapist. You would say something like, oh, I just had an argument with my husband; he was complaining about XYZ. And it would say, oh, tell me more about that. It had these simple little rules that would say things like, tell me more about that, or, what does that make you think of? It was basically something that had a few canned responses to what you would say. But it was intoxicating to people; people would sit there, have conversations, become emotionally involved with the program. So it was really probably the first chatbot ever, certainly the first one anybody ever noticed. And it caused a lot of hype in AI at the time. Taking a quick step back: people don't realize AI has been through several hype stages. You might remember the Watson hype stage, and in the '90s, when AI beat humans at chess, there was that phase. There have been lots of phases of hype; this is the biggest one we have right now, but there was an early one caused in part by ELIZA. The immediate lesson a lot of people drew at the time was: oh my God, computers are really smart, they understand us, they're very emotional, they're going to take over the world. But pretty quickly, people saw the limitations. The lesson people should have learned is that people are pretty shallow, actually. They will attribute humanity to, and anthropomorphize, things that really don't deserve it; I think that's part of the appeal of pets, frankly. And in some ways ChatGPT, although it's a thousand, a million times more impressive than ELIZA, and extremely useful in a lot of ways, is similar: people's first encounter makes them think it's as intelligent as a person, but when you actually try to get it to do a lot of normal things people can do, it can't do them. There's a lesson here, that there's a difference between sounding smart and being smart, and I think ELIZA was really the first thing that exploited the fact that people sometimes infer that things that sound smart are smart.
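A minimal sketch, in Python, of the kind of canned-response rules Nick describes. The patterns and responses here are invented for illustration; they are not the actual ELIZA rules, and the real program also did things like swapping pronouns ("my" to "your").

```python
import random
import re

# Invented pattern/response rules in the spirit of ELIZA; the real program's
# rules were richer, but the mechanism really was about this simple.
RULES = [
    (r"argument with (.+)", "Tell me more about {0}."),
    (r"\bI feel (.+)", "Why do you feel {0}?"),
    (r"\bmy (\w+)", "What does your {0} make you think of?"),
]
FALLBACKS = ["Tell me more about that.", "What does that make you think of?"]

def respond(utterance):
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return random.choice(FALLBACKS)

print(respond("I just had an argument with my husband."))
# -> Tell me more about my husband.
```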
You're alluding here, I think, to the Turing test, in a way, right?
Yeah, it's a similar point. Basically, and I really haven't read Turing's original text, so I don't want to misattribute this to him, but what people tend to think the Turing test says is that if I give you a computer, and you interact with it, and you think it's human, then that means it's intelligent. And I think that's wrong in two ways. One is you can be intelligent without being human-like. And two, you can sound human and actually not be that intelligent. You can even sound intelligent and not be that intelligent. I mean, in our normal lives -
Razib: In many cases.
Yeah, exactly. I mean, everybody knows someone who really isn't very smart, but who is really good at regurgitating smart things other people have said. In fact, that's what most people in engineering think about people not in engineering, in the humanities, right? So when people are judging other people, they're able to separate actual intelligence from regurgitated intelligence. But they need to get better at doing that with computers as well.
Yeah, so we've been at this for a while; I looked it up, and ELIZA was '64 to '67. And I remember these sorts of chatbots even when I was a kid in the '90s. They were really cool at first, because it was like, is this a computer? Is this a human? What's going on here? But what these chatbots are telling us is more about us; it's more about evolutionary psychology, human psychology, in a way. Because ELIZA, I mean, they just recently rediscovered the source code; they used to not publish code back then. They had to use abstraction and other things, all the basic stuff that you see, but it's a programming language; it's just taking inputs and producing outputs. And people think it's human, or people think it's intelligent. This is crazy. But, you know, what is intelligence? It's kind of like, you know it when you see it, but then people see intelligence in computers. And the pundit Matt Yglesias likes to say that a lot of the criticisms of why GPT is clearly not intelligent apply to a lot of humans. So it's like, how do I know that you're intelligent or conscious, you know? The whole thing is kind of a mindfuck: okay, wait, wait, what's going on here? So I think artificial intelligence is fascinating, because obviously we care about it, we care about automation, but it also comes back to us: wait, what's going on with us? There's a science fiction story in here, where artificial general intelligence emerges, and it concludes that humans are not intelligent life. It's not implausible, right? And I do feel sorry, by the way, that we are not a YouTube show, because as you were talking about neural networks, I was imagining some schematics that I've seen before, and you were kind of moving your head, and I think you were imagining the same thing. Neural networks are usually illustrated with certain schematics, and it's much easier; a picture in that case is worth a thousand words, so I apologize for that. Anyone who's interested, just Google neural networks, or go to Wikipedia; you'll see the image, and it makes it really clear. But I wanted to bring up neural networks because I read stuff in the late '90s, early 2000s by Gary Marcus, who is now a big AI skeptic. He's a cognitive scientist, a psychologist; he left academia a while ago. One of his books about cognitive science and psychology that I read was really, really skeptical of neural networks; I think it was written in the late '90s or early 2000s. But the point is, the hype cycles have come and gone, come and gone, come and gone. And I'm of an age where robots and artificial general intelligence were supposed to show up at any moment, so you start to get cynical. But, you know, there was Deep Blue, there was Watson. I actually remember the Watson hype cycle; I was an adult then. The Singularity Institute sponsored a viewing of the Jeopardy episodes. So, for context for people: Watson won at Jeopardy, right?
Nick: That's right.
Yeah, so that was a big deal. But obviously, Watson is nothing like GPT. It didn't use the same technologies or the same underlying systems; LLMs and transformers and all those things weren't around then. It was a totally different thing. What I remember is that Watson, or Deep Blue, which I think was also IBM, was supposedly going to do protein folding and unfolding; that was what they were ultimately there for. And just so the listeners out there understand, protein folding and unfolding is super important, because DNA turns into RNA, which turns into proteins, and from proteins you can create pharmaceuticals. But a lot of pharmaceutical drug development is basically iterative; it's trial and error, and that takes a lot of time. So if you can figure out how the DNA maps onto a protein product, that will shorten a lot of the trial and error, and that's why computation has become more and more important. And I believe DeepMind, the more recent iteration from Google, has gotten at this somewhat. So there are some real practical applications out there where AI is important. Going back to Minsky, I want to ask you: what years did you get your PhD? Where were you when you were working under him?
Roughly 97 to 2002.
So how did he feel about the field then? Like, what was your sense?
Yeah, no, I mean, he spent almost all of his time talking about how he felt about the field. We think of machine learning as this more modern thing, at least some people do, but the '90s, I would say, were when machine learning was really starting to become the dominant part of AI. And what that meant at the time was: rather than doing these kinds of reasoning or search-based things grounded in knowledge, which I was talking about earlier, people would not bother trying to give knowledge to the computer; instead, they would give it a ton of training data and hope it would infer the knowledge. That was becoming dominant in the field, and Marvin Minsky hated it. So one of his main themes was pushing for reasoning, knowledge, and planning. Another of his themes was diversity versus uniformity. What that meant in this context: the general pattern is that somebody invents a certain kind of algorithm, or class of algorithms, inside artificial intelligence, and then they try to claim that that algorithm will be able to do everything. Right now, obviously, we're living in a world where a lot of people in AI are trying to push deep neural nets to be able to do everything; that's their belief, and it's a premise behind a lot of their work. His thesis was that what you actually need is not one method that can do everything, but ways of creating systems that combine these methods. His famous book, The Society of Mind, argued that the mind, rather than being one specific, monolithic algorithm, is instead a collection or community of algorithms working together to be intelligent. So those were the two things: he was arguing for a kind of decentralization, and he was arguing against big data, let's call it that. Yeah.
So, yeah, big data. I mean, with machine learning, you're training these machine learning algorithms, and anyone who reads my stuff has seen plots generated with machine learning, where the information is not put in there a priori. So, for example: okay, I want to find population structure in this set of individuals. You just put in some parameters, and it looks at the data and creates these clusters. It does it by maximizing the likelihood or something like that; there's a function in there that it's maximizing, but you don't know what's going to come out, because it depends on what you put in, you know?
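As an illustration of the kind of unsupervised clustering Razib describes, here is a minimal k-means sketch in Python with invented data. Real population-structure tools like ADMIXTURE or STRUCTURE maximize a genetic likelihood rather than running k-means, so treat this only as the general flavor: you supply a parameter (the number of clusters), and the algorithm finds the groups itself.

```python
import random

def kmeans(points, k, iters=20):
    """Toy k-means: discover k clusters the data was never labeled with,
    by repeatedly improving an objective (within-cluster distance)."""
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: (p[0] - centers[i][0]) ** 2
                                      + (p[1] - centers[i][1]) ** 2)
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster.
        for i, cluster in enumerate(clusters):
            if cluster:
                centers[i] = (sum(p[0] for p in cluster) / len(cluster),
                              sum(p[1] for p in cluster) / len(cluster))
    return centers

# Two invented "populations" of 2D points; k=2 is the only parameter we supply.
pop_a = [(random.gauss(0, 0.5), random.gauss(0, 0.5)) for _ in range(50)]
pop_b = [(random.gauss(5, 0.5), random.gauss(5, 0.5)) for _ in range(50)]
print(kmeans(pop_a + pop_b, k=2))   # centers land near (0, 0) and (5, 5)
```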
That's right. And, you know, those machine learning algorithms are extremely useful; I'm not trying to argue that they're not, and neither was he. But in the context of actually trying to make computers that are as intelligent as people, which was his goal: observe, for example, ChatGPT. Super impressive, but it takes billions of words of training data, hundreds of billions; I don't have the exact number in my head. Humans learn language with maybe 100,000 or a million words in their training data. So it's several orders of magnitude more data than people get. So there must be something else going on in how people learn, besides what deep neural nets are doing. The other thing to realize, and this gets back to what you were saying earlier about the Turing test, is that LLMs and things like ChatGPT start to show their limitations when you try to have them reason, or plan, or actually solve problems. If you actually try to plan a specific trip with ChatGPT, for example, it's not that great at keeping track of where you're going, and when, and your connections, and a lot of your preferences. In contexts like that, these other algorithms are more useful, and I think what Marvin was trying to do was encourage people to get better at using those kinds of algorithms. And people are very good at those kinds of things; you'll find kids who are five or ten years old who can out-reason ChatGPT in a lot of contexts. Anyway, I forget how we got onto that, but that is -
We were talking about machine learning. I want to say this is a very fascinating point, and I'll use an analogy: Fermat's Last Theorem. Fermat claimed he had a proof, but it was never written down; supposedly it was short. So when the theorem was actually solved by Andrew Wiles, the proof was very long: Wiles got to the endpoint in an incredibly tortuous manner, using extremely complicated mathematical fields developed within the last few decades, while Fermat supposedly got to it quickly. Now, it could be that Fermat was just wrong; that's one hypothesis, and I'm not going to lie, I actually wonder if that's the case. But it could be that he was right, and he got to it in very few steps somehow. To me, that illustrates that you can get to the same endpoint in very different ways. So, LLMs, that is, large language models; I've done multiple podcasts on this, so I don't want to belabor what an LLM is. But basically, LLMs brute-force their way into seeming human and seeming clever. We are not brute force. Our brains are light, they're packed well; they're sloppy in some ways, but they're incredible in other ways. We achieve a state of seeming human very efficiently. I mean, yeah, I know we have some electricity going through our system, and we've got to eat, but we use a lot less energy than these big dumb pieces of metal. I'm still going to say they're dumb, because they're not controlling the world yet, and I hope they're not offended. But in any case, machine learning: I have a friend who works in agricultural genetics, and she's always saying it's just a buzzword, that regression is machine learning. How would you define machine learning? You kind of went around the edges, but let's drill down.
Well, there's the more literal definition. The most literal definition is basically that computers, by interacting with the world, develop a capability they didn't have before. They start out not being able to do something, not knowing something, and then they observe, they act, they plan, and after a while, they can do it. That's the most general definition, and there are lots of ways to learn: you can learn from lots of big data, you can learn by just speculating in your head and trying to prove things, or by simulations. Yeah, exactly, simulation is a great way of learning. But colloquially today, what machine learning means, definitely in the field, is using huge amounts of data to train computers to be smart. Yeah, that's what machine learning is.
So the example I like to use: a lot of phylogenetic programs today use MCMC, the Markov chain Monte Carlo algorithm. I like to say, and people have heard me say this: you could have done this in ancient Egypt, if the Pharaoh had said, you guys are all going to compute MCMC right now. You could have brute-forced it by hand. But the reality is, you need a computer to do it efficiently on a realistic timescale. And some of these things you were talking about, like search: people who have worked in the field, or adjacent to it, know that you can't do it exhaustively; you'd run out of all the time in the universe. So a lot of these methods are based on shortcuts, on sampling the space in a way that makes the problem tractable. Okay, so with MCMC, the theory was there a long time ago, but implementing it is a thing of the last couple of decades. Is machine learning like that, where the theory was always there, but we didn't have the computers?
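To make the sampling-instead-of-exhaustive-search point concrete, here is a minimal Metropolis sampler in Python, the simplest member of the MCMC family Razib mentions. The target distribution is invented for illustration; real phylogenetics programs sample over trees and model parameters, but the core loop has this same shape.

```python
import math
import random

def metropolis(log_density, start, steps=10_000, scale=1.0):
    """Metropolis sampler: random-walk the space, accepting moves
    probabilistically, so samples pile up where the density is high,
    with no exhaustive enumeration of the space."""
    x, samples = start, []
    for _ in range(steps):
        proposal = x + random.gauss(0, scale)
        # Accept with probability min(1, p(proposal) / p(x)).
        accept_prob = math.exp(min(0.0, log_density(proposal) - log_density(x)))
        if random.random() < accept_prob:
            x = proposal
        samples.append(x)
    return samples

# Invented target: a standard normal, known only up to a constant.
log_normal = lambda x: -0.5 * x * x

draws = metropolis(log_normal, start=5.0)
kept = draws[2_000:]           # discard the warm-up portion
print(sum(kept) / len(kept))   # close to 0.0, the true mean
```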
It is, actually. Yeah, I mean, there was a thesis. So, backpropagation is the underlying algorithm behind a lot of deep learning today; I won't explain what it is, but it is the core algorithm. And somebody found a thesis, I think from an applied statistics guy at Harvard, roughly 1970, that actually had backpropagation in it. He didn't call it that, and he didn't have the neural net metaphor, but it was basically backpropagation. Then, roughly in the '80s, David Rumelhart rediscovered backpropagation in neural networks, and there was a big boom of neural networks starting, I don't know, roughly in the mid '80s. There were some nice advances, but then it reached an asymptote and the advances stopped. And then we rediscovered neural networks again, I don't know, 10 to 15 years ago, in some popular way, and there were more advances. At the beginning of that wave, say 10 to 15 years ago, a lot of the advances really came from just having more computational power; there were small refinements, but it was mostly computational power. Since then, there have been additional advances. But yeah, all of this is very vexing to somebody who's been in the field. I'm embarrassed to tell people I do AI, because it makes me look like a trend follower. I was doing it when it was an AI winter.
You are, you actually are not a tool.
Exactly. Well, literally, when I started working on AI - there was a big hype cycle and investment cycle with VCs in the '80s and '90s, mostly in the '80s, in artificial intelligence. And then it collapsed; there was a bust, an AI bust, and they called it the AI winter. So literally, when I first started doing AI, everybody told me not to do it; it was career suicide. And it was quite a challenge, actually, in a lot of ways, just to get a job doing AI. So I've seen over the years, and I've also read about the time before mine and observed since, that basically some marvel comes out: once it was ELIZA, at another point it was Watson, at another the chess-playing Deep Blue, I think it was called. Everybody gets really excited and hyped about it, there's a big sequence of advances, and then there's a hangover. And we rediscover the same things over and over again. Another interesting thing: it was Geoff Hinton's lab that a lot of the most recent deep learning stuff started in, 15 years ago. But back in the '80s, Hinton was actually against backpropagation, because he was for unsupervised learning versus supervised learning. I could explain what that means, but basically, unsupervised learning, as you might know, Razib, means learning without labeled training data, and supervised learning means the computer makes an inference, and you say whether it was right or wrong; you're constantly giving it that feedback. Deep neural networks are very much about supervised learning, and Geoff Hinton was about unsupervised learning. So there are lots of ironies in the field. Lots of people think Hinton invented AI, and even though he's one of the best people in the field, AI was invented long before him. Somebody needs to sit down and dissect all this false conventional wisdom in AI right now, just as a reality check on how good actual popular science is at conveying this stuff.
Well, I mean, I've heard of backpropagation, but I've actually never looked at what it specifically was, so I just looked. I'm going to give you the one sentence from Wikipedia that conveys the overall gist of why you do not want to explain it: "Backpropagation computes the gradient in weight space of a feedforward neural network, with respect to a loss function." So, I get it. And I started looking; I've done some linear algebra and other things like that, so matrix multiplication is not foreign to me, but it looks a little gnarly.
I mean, there's a simple intuition behind it, which is this thing called gradient descent, which you might have heard of as well; you've definitely heard of it in statistics. It's: all right, I have a model, and it has parameters that generate a prediction. I run some predictions off the model, I see which direction they fail in, and I tweak the parameters to move a little bit more in the direction I want to go. That's gradient descent. It's like you're climbing a hill.
Razib: the hill climbing algorithm,
A hill-climbing algorithm. And backpropagation is just multiple layers of that. So in classic regression, you have your input and your output, you're correlating those two, and you're using hill climbing to find the best fit. Backpropagation is several sequences of that. That's really all it is.
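Here is a minimal sketch in Python of the gradient descent intuition Nick gives: one parameter, one tweak-downhill loop. The data and the learning rate are invented for illustration.

```python
# Minimal gradient descent on a one-parameter model y = w * x,
# fit to invented data generated with a true slope of 3.
data = [(x, 3.0 * x) for x in range(1, 6)]

w, lr = 0.0, 0.01          # starting guess and learning rate
for step in range(200):
    # Loss is mean squared error; its gradient with respect to w is
    # d/dw mean((w*x - y)^2) = mean(2 * (w*x - y) * x).
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad         # step downhill, against the gradient

print(round(w, 3))         # converges to about 3.0
```

Backpropagation is this same update applied layer by layer: the chain rule carries each weight's share of the error backward through the network.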
So, I think this is true: I had a friend who was a computer scientist, who was also studying genetics in the 2000s, and he would always say evolution is basically just a hill-climbing algorithm. That's how we think about adaptive landscapes and maximizing fitness functions and all these things; nature's done it. The basic underlying abstractions are very, very general. So before we get into more contemporary issues, can you tell us what transformers are?
Sure. Let me find the simplest way of putting this. Let's take a simple example: say you want to translate French into English. You could create a neural network whose input is French and whose output is English, and there are these middle layers being trained by your neural network training algorithm, and they're basically transforming French into English. The beauty of that is that those internal layers are finding some encoding of the meaning and the actual structure of what's being said in French, an encoding that you and I don't understand, but that is useful for the neural network. So that's what the transformation is, at some level. And the way transformer architectures work is that when you create this internal layer, this transformation, what's nice is that it compresses away superficial features. So, for example, if I have a paragraph in French, I can condense it into, say, a vector of 100 dimensions. Something I would normally represent with thousands of bits of input gets compressed down into 100 numbers, and that compressed layer is really encoding the meaning of what was said. Transformer networks are basically exploiting that kind of compression and encoding to generate a lot of intelligent stuff. That's the best I can do colloquially, I think.
Yeah. And it's a relatively new system; I just wanted to check, and Google Translate switched over to neural network, and later transformer-type, operations starting around 2016. And as you're talking about this, I know it's difficult, because these are technical things, and now I can really concretely understand why visual-spatial skills are important in math and stuff like that. I'm imagining how we would do this on YouTube, where, when you say a vector, I know what that is automatically, because I have enough math, but some of the listeners probably don't. I'm not going to ask you to describe a vector; if you don't know, look it up. But what I would do is cut away, or do a split, and show a vector as you're talking about a vector, and everything would be understandable. Anyway, neither here nor there.
Here's a more colloquial way of thinking about it. An important part of conversation is context, right? If I say, the bug is big, and I'm in a computer programming context, then the bug means a problem with my code. If I'm in a biological context, it means an insect of some sort. So context matters a lot. And the problem is that neural networks stop working well when you make their inputs too large. So you can't give a neural network all of human knowledge as context; you can't even really give it everything that was said in the past 10 or 30 minutes of a conversation, every single word. What these transformer layers do is compress the context into a small enough input that you can feed it back into the neural network as context. So part of the reason these neural networks are doing so well is that they're doing much better at context than they were 20 years ago, and this transformation that lets you compress the context, in a way that can be fed back into the network, is part of the secret sauce.
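For the mechanically curious, here is a minimal sketch in Python (with numpy) of self-attention, the operation inside transformers that does the context-mixing Nick is gesturing at: each word's vector becomes a weighted summary of the whole context. The dimensions and random vectors are invented for illustration, and real transformers stack many such layers with learned weights.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """One attention head: every position builds its new vector as a
    weighted mix of the whole context, folding context into fixed-size form."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])          # relevance of each word to each word
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # softmax over the context
    return weights @ V                              # context-weighted summary per position

rng = np.random.default_rng(0)
seq_len, d = 5, 8                                   # invented: 5 words, 8-dim vectors
X = rng.normal(size=(seq_len, d))                   # stand-in word vectors
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

out = self_attention(X, Wq, Wk, Wv)
print(out.shape)   # (5, 8): same shape, but each vector now encodes its context
```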
Okay, okay. Yeah, we've been talking about a lot of technical things. By the way, there is another podcast called Unsupervised Learning that I think has more reviews than mine; my goal is ultimately to surpass that podcast, but we'll see. I mean, it's a really good name, not gonna lie. It's on security and AI; it's by Daniel Miessler, 421 episodes. Oh, wait, he's got 122 ratings; I have more ratings than him now. Yeah, that's what I'm talking about. Sorry: 421 episodes. He must be starting to be annoyed, because it's a really good name, he got the brand, and now there's another podcast with it. I picked the name because I was trying to describe the way I have lived my life, which is with minimal supervision. So it wasn't the technical meaning, but whatever, I still like the name. Let's talk about stuff that's a little bit easier to grasp, stuff people are interested in. So, I'm talking to you right now, but the other day I talked to a friend of mine who's pretty much a Doomer; I'll give you the quote. This is Jim Miller; the listeners will have already heard that podcast by the time they're listening to you. Jim Miller is an economist at Smith; he's been talking about transhumanism and the singularity and all this stuff for a while. And he basically believes that the next 10 to 20 years are going to be great, that AI is going to transform everything and increase productivity and all this stuff. But he thinks that there's a 90% chance that we will be exterminated; I asked him what his p(doom) was. So he thinks everything's going to be great, until we're rendered extinct, right up until that moment. You're smirking, and I know you well enough to know why you're smirking. But what is your p(doom)?
Right. I think it's unknowable. I think there are two flaws in that argument; having not listened to his argument, I'll guess that there are two flaws in it. One is that he's assuming the technology is farther along than it is, and I don't think it is. And the second flaw is that he's probably assuming that an artificially intelligent being will spontaneously become selfish, or malevolent, or ambitious, or something. And I think those are both wrong. Now, it's kind of hard to sit here and argue this; I've given you some vague arguments. But regarding how good the technology actually is: it's been 14 months now, 15 months, however long it's been since ChatGPT came out. You and I both have friends, and we have been in social settings, where, at the time, everybody thought it would just put a lot of people out of business in five years. I have informal bets with people about five years out: are we going to have project managers and secretaries? I say we will. So I think we need to be a little sober about it; it's been 14 months now. For example, ChatGPT looked like it could code really well, but there hasn't actually been a panoply of new software created because of ChatGPT. And there are lots of other things that haven't happened yet; it doesn't seem like we're necessarily going down that path. So there's some sobriety that I think needs to start setting in. But okay, let's just assume ChatGPT, or LLMs, or something else, or some combination: let's just assume there's some advance that makes computers super intelligent, or at least as intelligent as people. Does that mean they're going to exterminate people? I think that's up to the designers, to some degree. There's no reason why intelligence has to be malevolent. Dumb animals are malevolent too, right? In the animal kingdom, plenty of animals will eat each other and are quite savage. And humans, I don't think we're any more; you would know better than I would, but I'm pretty sure we're actually not the most violent animals in the animal kingdom. It's just that we have more technology to industrialize our violence and do it at a larger scale. We also know plenty of people who are not that intelligent but are malevolent, and plenty of people who are very intelligent and quite kind and sweet. So I just question this association with intelligence. The reason we have intelligent beings on earth that are basically bad is a result of natural selection, of the competitive environment in which they evolved, not a result of anything intrinsic to intelligence. So to me, the real danger of AI isn't that it's going to spontaneously decide to exterminate humans; it's that someone will invent AGI, or human-level AI, first, and use that as a tool to achieve domination in a malevolent way. You could think of it as the bomb; that's a good analogy, I think. What would the world be like if the Soviets or the Nazis had invented the bomb first? I think it would be radically different, radically different.
In fact, I think it's an underappreciated element of American history: try hard to think of another empire in history that would have invented something as dominant as the nuclear bomb and not immediately taken over the entire world. So who got the bomb first really mattered a lot for the unfolding of human history. And I think that's the thing to worry about in AI. And you're not going to be able to stop it. You might be able to stop American companies or American researchers from building AI, but you're not going to be able to stop Chinese companies, or other countries all over the world. ISIS, assuming ISIS had good AI researchers, there's no way you're going to stop them from developing a good AI. So, yeah, that's my take on it. There is something to worry about, but the thing to worry about is who invents AGI first, not that AGI itself is going to cause a problem.
I mean, I knew this; I think this is your opinion. You and I have been talking about artificial intelligence on a broad or general level for a while, since I've met you. But since you said 14 months: since ChatGPT came out, GPT-3.5 slash 4, whatever, wherever we are, LLMs have shown us these amazing powers of verbosity and all this stuff. Over the last 14 months, have you updated your projection? Because when we talked in the first couple of months, you were pretty energized. Some of these arguments you had already made to me. And there were people saying, I think, that humanity had 12 months, right? We're two months past 12 now.
That's right. Yeah. I mean, you probably remember, I was more skeptical about all those claims, even back then. To be honest, I was impressed by how good it was; I wouldn't have predicted it, so I'm not going to say it wasn't an advance. I don't want to sound like I'm putting a wet blanket on LLMs or ChatGPT. It was an amazing thing; we're using it in our company, and it's amazing, incredibly useful, and super important. But you can be super important without actually ending humanity. So, have my views changed? I'll be honest: even though at the time I was one of those people who didn't think we'd all be out of business in 12 months, I am still a little bit surprised at how slowly things have advanced since then. And I don't know what to make of that. There's something inherent to industrializing and corporatizing something, where you have all these great startups that invent something really innovative, and then they never do it again; they just spend all their time optimizing and growing their original product. So is that what's happening to ChatGPT, or is there something inherent to the actual technology such that it's already plateaued?
Yeah, so one thing that I'll say, and you mentioned your company, we're going to get to that, listeners, but you were talking about what you anticipated: there are people saying LLMs are running into limits. Another thing I have to say is that I have read computer scientists who studied LLMs, who work with LLMs, who said: well, a year ago, I just didn't think it was possible that GPT could do what it's doing. These are the people who know the theory best, and the fact that they made an incorrect prediction means that we just don't know what's really going on at some deep level. Just to make clear what I'm getting at for people: with Newtonian mechanics, we know the model; we know, for example, that acceleration under gravity in a vacuum is constant, that these parameters are fixed, so we have a good handle on what's going on. With some evolutionary stuff, I can talk it through with you, like, okay, what's going on here; we have certain predictability and regularities. But my inference, from the fact that the quote-unquote experts in the field did not anticipate what happened, is that we don't really know deeply what's going on, which means it's going to be incredibly hard for us to predict whether it's going to increase linearly, increase exponentially, or plateau, whatever.
It's true. It's a good point. And I think we should apply that to the pessimistic experts, and also to the optimistic experts, and definitely to the optimistic neophytes. So yeah, you're right; at some level, we have to have some humility, we don't know what's going on. To me, a couple of things make me think there will be a plateau. One is that things have come along every five or ten years in AI that really surprised a lot of people, including the experts, and caused a lot of optimism, and then plateaued. There are five or ten plateaus like that in the history of AI; we've talked about a few of them. So just from that meta-historical point of view, there's some reason to make that inference. The other is the things I said about the slowness of the progress since then. So I'm one of those people who, if you'd asked me two or three years ago whether it would be as good as it is today, would have been skeptical. So what was I wrong about there? I think a lot of what's going on that makes it look impressive is that it's regurgitating other people's thoughts. There's a sense in which most of the useful thoughts and ideas and concepts and expositions you'd want have already been documented on the Internet somewhere, and it's really good at assembling those things; it's doing a better job of that than I thought it could. But I can still give it some extremely rudimentary reasoning or planning problems, and it's horrible at them. So anyway, your point is well taken; we all need to have some humility. And really, the only answer is: if you really are worried about the future, then you want to be the one who's building it.
Yeah, I'm smirking a little bit, because I use ChatGPT probably two out of three days, a lot of times for basic stuff. The primary thing it's done for me is act as a supplemental search - probably a complement to search, probably equal now. And there are certain things like, give me a CSV table, or something I could code myself, some SQL queries or some Python, where now I just say: do it, assemble it. Okay, but sometimes ChatGPT just bullshits me about something that I know, and I'm like, what the fuck? Like, come on, man.
Like you get from your friends.
I'm like, you do not know what the fuck you are talking about. You are trying to bullshit me right now. Are you serious? I mean, again, this is the humanization aspect, where I literally will blurt out, like, what the fuck are you trying to say here?
I mean, one way to think about it is that what it's trying to do is impress you with how smart it is. If it can do that by actually solving your problem precisely, or because someone else already solved your problem and it can just regurgitate that solution, it'll do that. But the minute you put it in a situation where it has to actually innovate a little bit, it's going to start confabulating and making stuff up.
All right. So, your company. We've been talking about AI - the pros, the cons, the history of it, your history with it, your pessimism, your optimism. You're obviously delusional, because you're a startup founder; we can say that. You know, I met you a couple of years ago - I feel like we've known each other longer, but a lot has happened since the pandemic. I met you at a party, I think, in 2021. Actually, no, I met you at a dinner, September 2021. And then I kind of forgot that I met you at a party - keeping it real - because you were at the other end of the table. Remember that?
I do remember that. I'm surprised you remember the actual date. Does that mean you wrote it down in your journal, or you keep a diary?
No, I just remember. The diary is my brain. But in any case, you were saying, basically, don't do a startup unless you really want to do it - it's kind of insane. And you have a startup, so you're insane; we've established that. But your startup is artificial intelligence, which is better than all the non-artificial-intelligence ones when it comes to funding and traction and visibility. Going back into business mode: my startup also has AI in it, and we actually had that before the current wave. I'm not going to go deep on AI in biology and health, but - we talked about ELIZA the therapist - I'll just say really quickly: there is already a lot of evidence that AI is going to transform simple diagnostics. I know this because I've seen it happen before my very eyes. General practitioners, nurse practitioners - they need to step up their game in terms of the value they add and figure out what they can do, because a lot of the basic stuff is going to be taken over by AI. You are going to query an AI when you're sick, for a lot of the basic things.
Yeah. You could predict 80% of your interaction with your GP by looking at WebMD, right?
Yeah, but I think the issue is people. Twenty years ago, when WebMD first came out, people said, oh, nobody's going to go to the doctor anymore. The reality is, people want that human experience. And frankly, the verbosity that an LLM like GPT spits back at you is actually what people want.
Yep. No, absolutely. I can imagine it transforming a lot of things, hugely, and that would be one of them. Absolutely.
Yeah. But you're doing something different, and I know a little bit about it. So tell the audience: what's the thesis of your company? Where are you at? Where do you want to go? What are your ambitions? That's a lot, I know, but start with the thesis. I'm not saying give the elevator pitch - we're not talking to VCs here - but people just don't understand where you're coming from, what Dry.ai is.
Yeah, absolutely. We're interested in human progress and accelerating it. Our thesis is that the entire world runs on software right now - you've heard the expression "software is eating the world." This conversation is totally mediated by software; any podcast where people complain about technology is actually using technology to complain about technology. If you want to drive your car, that requires software. If you want to rent a room somewhere - the entire society, from science to business to intellectual life to your family, depends at some level on software. The problem is that software is an input to the entire economy, and the need for it is growing exponentially because the economy is growing exponentially, but the supply of programmers is not growing exponentially, and in fact software is getting more and more expensive to produce. So I claim there's a huge amount of progress that's not happening because software is so expensive to produce. One analogy is with energy prices. Say the price of gas is $3 a gallon. If it went to $30,000 a gallon - a 10,000-fold increase - and electricity and everything else went up the same way, the economy would collapse, because the entire economy runs on some form of energy. Anything with automation, anything with electricity, that's energy. Make that 10,000 times more expensive, and most things become too expensive to do. People won't invest in new technologies and new companies, because they can't pay for the energy, and the companies won't be profitable when their input costs are that high. So try to imagine a world with energy prices 1,000 or 10,000 times higher, and think of all the progress that collapses. I'm claiming that that difference in levels of progress is what we'd gain if we made software 1,000 or 10,000 times cheaper to produce. Basically, we have $30,000-a-gallon software today, and we don't know all the progress we're missing out on, because we've never seen $3-a-gallon software. So what our company is trying to do is make software several orders of magnitude cheaper to produce. Other people are trying to do that too, and there's a lot of hype that LLMs are going to do it - so far it hasn't happened. There are lots of ways of approaching this, but we have a platform that already helps people build certain classes of software literally a thousand times faster, and as this episode is released, or soon afterwards, we'll have launched something at dry.ai.
And that's our goal, basically: to let people build the software they need much faster. And it's not just a matter of human progress; it's a matter of dysfunction in society. A lot of what people complain about - social networks, search engines, and so forth - persists because it's really hard to build your own; it takes thousands of programmers. If you could actually just build your own software, your social networking and your search would be much better, your work would be more efficient and more pleasant, and you wouldn't have to grapple with a Frankenstein of other software packages to get the simplest thing done. That's, in a nutshell, what we're working on. Some early applications: we've done a few things for you, basically. We created a RazibGPT, where we take your writing -
I'll put a link to that.
Yeah, we took your writing and we created a ChatGPT out of it, just for you. You can go there and ask something like, did the Basques have Indo-European genes, and it'll give you a very thoughtful answer in your voice, using just the data in your writing. Another thing we've done - I don't know if you want to talk about this - is an RSVP app for the dinners you host sometimes, with certain characteristics: you can put your dietary preferences in, it's anonymous, this or that.
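Dry.ai hasn't described how RazibGPT is built, but the "answer only from your writing" behavior is what retrieval-augmented generation (RAG) typically gives you. Here is a minimal sketch under that assumption, using the OpenAI Python client; the model names and placeholder corpus are illustrative.

```python
# Minimal RAG: embed the author's text chunks, retrieve the closest ones to
# a question, and have a chat model answer using only that retrieved context.
import numpy as np
from openai import OpenAI

client = OpenAI()

corpus = [
    "chunk of essay text ...",   # in practice: the author's posts, split up
    "another chunk ...",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

corpus_vecs = embed(corpus)

def answer(question, k=2):
    q = embed([question])[0]
    sims = corpus_vecs @ q / (np.linalg.norm(corpus_vecs, axis=1) * np.linalg.norm(q))
    context = "\n".join(corpus[i] for i in np.argsort(sims)[-k:])  # top-k chunks
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[
            {"role": "system",
             "content": "Answer only from the provided context, in the author's voice."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(answer("Did the Basques have Indo-European genes?"))
```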
No meta.
No meta, exactly. Another thing we experimented with was creating sort of a Yelp just for people who like authentic ethnic restaurants, which a lot of your readers do, because they're into understanding other cultures and the food that comes from them. Everything I've just mentioned, you could build in an hour or a day on dry.ai. That's, in a nutshell, what we're working on.
What you're alluding to here is, for example, that to get a computer you could use, you originally had to be a hobbyist who would order the parts and assemble it yourself. Eventually there were kits, which simplified it - kind of an IKEA-ization of home computers. And then finally people got the idea of a preassembled computer that you just buy. And they're preassembled, but they're not all the same. When we were young there was this idea that everyone would have the exact same computer, but that's not true at all - people like different colors, different preferences, different parameterizations of the computer, which is the abstract class. It's the same thing with software: a lot of times people have some tweaks they want to make, but otherwise they want the same thing. The problem right now is that with software, you have to build. Yes, there are prebuilt modules, but even then it requires technical skill to get the modules to work together; debugging is a huge issue, et cetera. So any customization is a total pain in the butt, while using the exact same software package as everyone else is easy. You go from using the exact same package - totally user-friendly - to, okay, you need to rewrite the code - totally not user-friendly. Dry.ai would be on that spectrum, obviously toward the user-friendly end, but it allows the individual to make some changes without having to go into the code.
Yeah, that's a great analogy. The analogy I use is building a house. If you want to build a house from scratch, you have to get nails and wood and steel and drywall and assemble everything piece by piece, and that takes a lot of knowledge, a lot of time, a lot of expense, and a lot of labor. There are two ways of making that simpler. One is to buy an entirely prefabricated house; the problem is you can't customize it. That's like cloning someone else's open-source package or installing a template - something someone else has already built. The other is the modular approach: rather than using wood and nails and drywall, you just buy a wall, and you assemble walls and floors together. That would still take a lot of work, like you alluded to - you'd have to be quite expert, and it takes a lot of labor and expense. Ideally, what you'd like to do, and what we want with dry.ai, is to say: build me a three-bedroom, two-bathroom ranch in a mid-century style, with a large great room and an open floor plan. You just say that, and it creates it for you.
The command-prompt world.
Exactly. You'd like to be able to specify stuff at a very coarse level like that and have the house automatically created for you. But then, if it's not exactly what you wanted, you could go in and customize it the way you want. That's the middle ground we're trying to achieve: very quick creation of the software you need, customized, but also the ability to hyper-customize it as much as you want.
Okay, so - again, I don't want to shill my company, but it's interesting that we're on parallel tracks here. You're focused on software; we're focused mostly on genomics, and potentially other life sciences. Right now, to get the variation that geneticists use in diagnostics and everything else, people build these labored pipelines that are handcrafted, artisanal. Yes, they're very extensible, but people don't want to break them, and there are massive issues with the maintainability of the code. Then there are out-of-the-box, turnkey solutions, but the problem with those, for a lot of scientists, is: okay, but then I'm stuck with the presets. So what do we do? Recently I was in San Francisco at the JPM healthcare conference, and I was talking to somebody and said: look, the way I'm imagining it, you have a command prompt. You say: I have a mushroom, I want it at this quality threshold, and I need at least this many markers - assemble a workflow, run the alignment, and give me the variation. The command prompt feeds that into our artificial intelligence, which assembles an appropriate pipeline with the appropriate modules - see the sketch below. That's all people want: some customizability, some freedom. And my argument is that this will result in more science, more productivity, and so on - the exact same thing. We have this in the notes here: the Industrial Revolution unlocked all this energy, the agricultural revolution unlocked all this energy; time is energy, time is money. This will unlock our lives in a lot of ways. Imagine not having to go to checkups. Human doctors would become extremely value-add professionals. That means a lot of doctors who are sitting on their butts collecting rents are going to be scared, but if you've got skills, that's going to be clearly obvious. Similarly with software: some people will say, oh, now my job refactoring this or that is gone. But the reality is, software is a tool that we use, and you shouldn't have to be a specialist in the tool. I mean, I have a friend who learns a new programming language every three or four months. He's just weird, okay? That's just weird, and it has no practical benefit.
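As a purely hypothetical sketch of what "compile the command prompt into a pipeline" could look like: here is the kind of structured request a natural-language front end might produce and then expand into stages. None of these field or stage names come from GenRAIT, dry.ai, or any real product.

```python
# Hypothetical: a natural-language request ("I have a mushroom, quality
# threshold Q30, at least 1,000 markers ...") parsed into a structured spec,
# then expanded into an ordered list of pipeline stages.
from dataclasses import dataclass

@dataclass
class VariantCallingRequest:
    organism: str        # e.g., a mushroom species
    min_quality: float   # read-quality threshold (Phred-style)
    min_markers: int     # minimum number of markers to report
    reference: str       # reference genome to align against

def plan_pipeline(req: VariantCallingRequest) -> list[str]:
    """Assemble pipeline stages from the request (all stage names invented)."""
    stages = ["qc_reads"]
    if req.min_quality > 0:
        stages.append(f"filter_reads(q >= {req.min_quality})")
    stages.append(f"align_to({req.reference})")
    stages.append("call_variants")
    stages.append(f"filter_variants(markers >= {req.min_markers})")
    return stages

req = VariantCallingRequest(organism="Agaricus bisporus", min_quality=30.0,
                            min_markers=1000, reference="A_bisporus_ref_v2")
print(plan_pipeline(req))
```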
Well, you're making a good point. First of all, there's a lot of historical ignorance in these conversations, where people talk as if automation is some new thing that's going to change the world, when automation has been happening for 250 years at least, through the Industrial Revolution. And what you learn there is that the unemployment rate hasn't gone up; it's probably gone down since 1770 or 1780, whenever you want to say the Industrial Revolution started. The other thing is that, in some sense, automation makes workers more productive, and that makes them more valuable. A contractor with an electric drill can just do a lot more than a contractor with a manual drill, so they can generate a lot more value, which makes their labor more valuable. So it's not something I worry about. Obviously, in the near term some people are negatively affected, and there might be problems to deal with, but let's deal with them and not kill the future, basically.
I mean, yeah, the reality is it's already happened in software multiple times. It used to be - well, now we write code in high-level languages.
Yeah, you're right. When programming languages were first invented, people thought they were going to put programmers out of business, because before that, programming was either punching bits onto punch cards or writing assembler code or machine code. They literally thought programming languages would put programmers out of business, but of course it was the opposite.
Well, let me - for the listener: a lot of listeners who are Gen X might know this, if you're a nerd. Microsoft Word - which for a while, before Google Docs and other things, was everywhere - was basically a clone of something called WordPerfect. Now, WordPerfect was arguably the better product, and one of the reasons is that it was written in assembly. It was closer to the machine code, closer to the source, so it was optimized and perfected; it wasn't going to be buggy and crash on you. The problem with WordPerfect is that it was released on a much slower cycle than Microsoft Word, because Word, I think, was put together in Visual Basic or one of the Microsoft integrated development environments - for the software people out there, the tools where you put together modules - Visual Studio, yeah. So they put it together very quickly, and they kept releasing really fast. Eventually WordPerfect kind of went out of business; I think it was acquired, I don't know if the brand is still around. My point is: yes, WordPerfect was great, but it was released so slowly that it ended up being lapped by Microsoft Word, which kept releasing, iterating, and improving. Imagine a world like that for all types of software, where you don't have to learn how to write Python or C or whatever to release a great idea. And in terms of productization, a lot of times it's just the user experience, which is not a code thing - it's an idea.
No, you're absolutely right that the complexity of software makes it really slow to iterate, so you get less innovation. Even a simple thing - for Facebook to go from liking posts to reacting to them with multiple possible reactions - took months and tons of programmers, for one simple little feature. And going back to the assembler thing, I think that connects to your command-prompt way of talking about this. Today, when people program, they're translating the design in their head of what they want to build into something a computer can understand. What you were describing - the command prompt - is a way of stating what people want in human terms and letting the software translate that into the actual analysis the package performs. That's something we're aiming for in our company as well. That's one advance that's going to make building software easier: rather than forcing people to translate things into computer terms, let them state things in their own terms.
Well, I guess as we're closing out - we've been talking for a while; it's been a great conversation. You're a trained computer scientist, cognitive scientist, artificial intelligence researcher, and now you're an entrepreneur in the startup space, trying to make things simple and productize some of your insights. Do you use your background as an artificial intelligence researcher every day? I'm just curious - do you think about that?
Oh yeah, absolutely. The goal of my research was to understand the difference between humans and computers - why humans can still learn things much more quickly than computers can. I wanted to be able to have something learn. My master's degree is actually in child psychology, because I wanted to -
What? You just blew my mind, bro. I'm just freaking out right now. Y'all can't see my face.
I was extremely earnest and serious, and I said: look, if I want to make computers able to learn, I should understand how humans learn, and children are the best learners. Also, in the history of science, it's often best to start with a simple system - in physics that was the inclined plane or planetary motion, and in biology, Drosophila has been a good model for a lot of things. And I figured child cognition is simpler than adult cognition, so it's a nice microcosm for studying cognition in general. So what I did, as part of that, was develop a framework of reasoning and learning and planning that tried to identify the core building blocks of all of human cognition. Human cognition, let's say, reached its peak 50,000 or 100,000 years ago, whenever it was - relatively recent in evolutionary time. Maybe there's been some evolution since then, but it can't have been a lot. So whatever you and I are using right now to have this conversation, or what people use when they're trading stocks, or talking about politics, or organizing science - they must be using brain mechanisms that evolved to do what people did 100,000 years ago: moving around, understanding the physical world, basic social interactions. So I developed a theory that found a substrate of core primitives that could explain all of human cognition, and that's at the foundation of what we're doing now in my company as well. If you have these primitives, and you do a really good job of implementing them, then you can crank and build a whole panoply of things on top of them much more quickly than you can using brute-force data or brute-force search or anything like that.
Well, as you were talking, I did two things. One, I asked ChatGPT - or DALL-E, I guess - to create an image of a Neanderthal doing a mathematical proof. It's not very accurate; it looks more like a chimpanzee doing a mathematical proof. Anyway, that's neither here nor there. This is the creativity AI is unleashing. The second thing is, I loaded dry.ai. It says: build your own AI-powered everything app; build a smart space powerful enough to be your own intranet; as simple as ChatGPT, easy, with no coding. So that's what you were already talking about. There's a Learn More, and RazibGPT is on that page, so when people are listening to this podcast, that'll be there. It's been great talking to you. I've known bits and pieces of this for a while, but you put it all together. I hope the listeners enjoyed drilling down on some of the details. Hey, no one's forcing you to listen to this podcast; if you don't enjoy figuring out a little bit of what backpropagation is, maybe you should train yourself to listen to another podcast, because here you're going to be hearing about stuff like this, right? But it was great talking to you, Nick. I'll see you around town as usual, but also on the internet, and I really wish you the best on everything. Obviously we're on parallel tracks in some ways, so what works for you will work for me, hopefully, and vice versa. It was great talking to you; I hope you enjoyed it too, man.
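For anyone who wants the "little bit of backpropagation" Razib mentions, here is a minimal toy in numpy, added purely for illustration and not something discussed on the show: compute a loss on a tiny model, push its gradient backward with the chain rule, and take gradient-descent steps.

```python
# Toy backpropagation: fit y = w*x + b by repeatedly computing the loss
# gradient (the "backward pass") and nudging the parameters downhill.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + 0.5                # target: slope 3.0, intercept 0.5
w, b, lr = 0.0, 0.0, 0.1         # initial parameters and learning rate

for _ in range(200):
    y_hat = w * x + b            # forward pass
    err = y_hat - y
    dw = 2 * np.mean(err * x)    # backward pass: chain rule gives the
    db = 2 * np.mean(err)        # gradient of mean squared error w.r.t. w, b
    w -= lr * dw                 # gradient-descent update
    b -= lr * db

print(round(w, 2), round(b, 2))  # converges to roughly 3.0 and 0.5
```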
Yeah, likewise, Razib. I didn't realize the parallel track - I didn't realize exactly the connection. You guys are doing, in some sense, a low-code/no-code genetics software platform. You articulated the benefits of that quite well, and I'm glad we're both working on this kind of thing. I enjoyed talking to you about it. Oh, go ahead.
For sure. No, I mean, you use the term low-code or no-code all the time; I've actually never used that term in relation to GenRAIT. So I'm going to use it now. The thing is, we don't work with devs, we work with scientists, so it's a somewhat different lexicon - but the same concepts.
And there's a class of investors who understand what you mean as soon as you say that.
Inshallah, inshallah. All right, talk to you later, man. Good talking to you; take care.
Whole genome sequencing is used for adults and children every day to assess risk for thousands of diseases. Orchid, a genetics company led by scientists from Stanford, is able to do this for IVF embryos. Now, instead of waiting for a diagnosis, parents can assess whether their embryos have genetic variants known to cause severe conditions before their child is even born. No other test can detect these issues so thoroughly or so early. So check them out at orchidhealth.com.