Hello, I'm Rob Hirschfeld, CEO and co-founder of RackN and your host for the Cloud 2030 podcast. In this episode we talk about Figure 01 and the synthesis between humanoid robots and LLM AI. Fascinating potential. We really got excited about this idea that robots could be navigating spatial environments, learning from their environment, or at least interpreting their environment in very human ways, and interacting with humans from simple instructions. This started from a short video that's in the show notes, something that you might want to watch all the way through, even before you hear us talk about it. If you don't want to watch the video, that's fine, you'll still get a lot from the conversation, because we really analyze not just what's going on in the video, but why we think it's significant, and how it's going to have implications for how people interact with artificial intelligence in the near term. So a fascinating discussion that I know you will enjoy.
I put the notice, I put the link in the chat. If you have not seen this video of Figure 01, launched by Figure AI together with OpenAI, watch it now, because it is, in my view, the bellwether for the next two years of technology. And the impact on CIOs is going to be enormous, for one reason: they have to get out of their headspace and start thinking in terms of synergies, combinations, and the use of technology for productivity. So I will leave you with these two words before you take a look at it: I hope that you will think about a palletizing robot and an autonomous truck. Why isn't the palletizing robot, or the pick-pack robot, on the back of the last piece of equipment loaded into the semi trailer that's autonomously driven? That's the combination. That's a combination that says: it can pack it itself, and it can unload it wherever it needs to be. That means I can then take the drivers of the trucks and the warehouse general labor and upskill them to be either running warehouses, laying out warehouses, or managing systems, roles that do not necessarily require education in the traditional sense of university curricula, but leverage their innate and experiential levels of knowledge for good. This was a game changer.
Shoot, so I've seen a little bit of these Figure pieces; it literally just started popping up on my feed. So Figure, just so everybody's aware, is a humanoid-form robot, and they actually look like they've done a really good job of mimicking human form. It basically has the same proportions and the same dexterity as a human worker. It can lift 40 pounds, which is, you know, a pretty good starting point for operational work, but it has very fine motor dexterity in the hands and the grip.
And it has the capacity to take in human speech and follow up on commands; it learns on the fly. It doesn't just use ChatGPT, it uses various forms of AI. Figure, or sorry, Figure.ai, is the robot maker there in partnership with OpenAI. Combined, this thing called Figure 01 is one of the most advanced humanoid cobots that there is, and why it's such a game changer is that this exemplifies everything that was in the precepts of Industry 4.0, smart manufacturing, and Industry 5.0 over the last 20 years, rolled into one. And why it's such a game changer for CIOs is you need to start thinking in combinations, not of tools, not of solving a single problem, but of using technologies, with how they're applied being much more important than what those technologies actually are. And yes, in the video you'll see there's a bit of a communications lag. And yes, there are some unanswered questions, like, when you see it, you'll see that there are plates. Well, how does the Figure 01 know that the plates are clean versus dirty? But the prompt at the beginning of the video basically says "in the current scene," therefore it's making assumptions that you have criteria behind the current scene. But this was, to me, the thing that I have been waiting for, to start exemplifying where my brain goes when I think about the fourth industrial revolution, advanced technologies, how things work together to solve a problem, ergo, autonomous vehicles and cobots change the world.
That's my little spiel.
The thing that's really surprising here is they're talking about learned operations. So it's not pre-programmed "pick up the plate" or things like that. It's using learned behavior: the AI is able to say, okay, I can identify what a plate is based on my visual field, and then it can figure out the motion by itself, rather than following a pre-programmed motion.
Okay, yeah. And you'll notice also that with the apple, it doesn't reach across in front, you know, left to right on an angle; it passes the apple from the left hand to the right hand to the individual. And I'm not sure if that's a safety behavior that has been trained, you know, don't go within a certain distance of the human, or if it's just something that has not been thought of yet.
Actually, take a look at it again, because it does something even better. It transfers the apple from one hand to the other, and then when the human holds his hand out, it drops it. But the robot does not put it in his hand; it puts it about four to six inches above his hand and then releases the apple, and it drops into his hand.
You are correct, 100% correct. I've watched it probably 50 times, and each time I have another question that comes to my mind. Like, he says, "Can I have something to eat?" And the robot doesn't say "What would you like?" or make a move to go in another direction as a human would. It takes in exactly what's in its lidar and in its context, and then responds, because the only thing it had available was an apple. Like, how does it know whether the plates were clean or dirty? But it put them in the dish drainer. All of these little things are clearly in part machine learning and learning on the fly: this is what's in my scene, this is all I need to know, I can't turn and go to a refrigerator to give you something else to eat. And with the garbage as well, right? It picks it up and it puts it somewhere, because it knows, it's been told in speech, it's garbage. So it doesn't leave it on the counter; it immediately clears it away.
There's another moment in there where it gently bumps something, and it actually has a very human sort of reaction to the bump, where it puts its hand down. Yeah. It wasn't what I expected. Wow.
So now I will also read you something I wrote a few years ago. It says: with its roots in the concept that smart technologies could be wedded to production systems, Industry 4.0 precepts were first introduced in 2011. It's a paradigm shift made possible by advances which constitute a reversal of conventional production-process logic. Simply put, this means that industrial production machinery no longer simply processes a product, but that the product communicates with the machinery to tell it exactly what to do. I wrote that in 2017.
And I'm still digesting seeing this idea realized. I mean, I heard about, like, Mercedes ordering these robots to work on the factory line. Yeah, BMW. Okay, thank you. But yeah, having a human-dexterous robot would be, you know... There's a cobot, which I usually think of as: it's close enough to human that you have a human do the motion, and that's what trains it. But what you're describing here is, "I need you to weld this seam, please," and it would have enough understanding to know what the welds are, and it could say what tolerance or what, you know. It's just a whole different concept, from a manufacturing perspective, for dealing with robots.
And it's not just robots. It's robots, it's cobots, it's smarter equipment. Like, why is an industrial robot only programmed to do one job, when you could remove the arm, or the piece at the end of the arm, and have it do a variety of jobs? And those are the ones that are coming out now. ABB just bought a software company, Siemens is going down that road, Caterpillar went down that road. Even Boston Dynamics, with its new Orbit capability, which can map missions for Spot, the dog, and other things. All of these pieces of technology are now being integrated in such a way that multifunctional, closer-to-generalized AI is not five years away anymore. It's maybe two years away at the longest.
And I'm not sure if that's comforting.
Oh, well, I'm sorry, it gets me very excited, because it isn't just about factories. It's about things like firefighting, life-safety issues, window cleaning, anything that puts a human at risk, but it also can elevate us to take on more creative tasks, more brain work, and less of the rest. But this synthesis and synergy of these technologies is going to change the CIO role immensely, not just in manufacturing.
And I agree, retail too. I think this is the IT worker in general. And, you know, to me, this is the point of these conversations. It's what you're already seeing with LLMs, the LLaMAs, and GPTs: it's a question of understanding how to integrate your work products with these other components. And so the challenge becomes, how do we figure out how to improve the work that we're doing with access to these technologies? That's the creativity that we require, some out-of-the-box thinking, letting go of some assumptions.
Well, it's going to be a bloodbath of industry disruption, that's what it's going to be, because there's going to be so much resistance to change. It's going to disrupt the IT shop like the assembly line disrupted the manufacturing sector 100 years ago. It won't be the same, right? I mean, we're not taking IT people and putting them in the automation; instead, they're going to be governing the process and managing the process. So from an organizational, job-role design perspective, it'll be different, but in terms of the way that it's going to disrupt, it's going to be the same thing, because, you know, so many people are going to be stuck with the old way of doing things, and they won't be convinced until they see, you know, their company and other companies in the industry disrupting them.
I'm going to wear my skeptic hat here on this. So, yes, there's potential for disruption, and it is possible that the technology is finally good enough to meet the promises. But we've gone through several cycles where a technology has promised this kind of disruption and fizzled. I think the most recent one was no-code/low-code, and before that, all the systems that promised to write your code for you. It's always fallen short. Again, I don't want to be a downer on this, but given our track record on these promises, I would rather be cautious. It doesn't mean that if this actually delivers, I wouldn't adopt it; it's more that I would rather take a wait-and-see approach and not put all my eggs into this basket.
I don't disagree with either statement, that technology has promised and not always delivered. What I see is that there's already a rush to use generative AI. Right? Every CEO, every CIO, it's at the top of everybody's list as number one. But it's without a purpose, a direction: go find the business problem you're going to solve with this technology, or how it's going to make life better for your employees, or benefit your suppliers, your customers, your bottom line or your top line. People are searching for those answers. This, on the other hand, says: here's how. Here's the example. This is where combinations of technology really come into play. Imagine if that was not a younger-ish person interacting with the cobot, but rather an elderly person who said, "I need you to do X." Yes. The idea, my point, is simply that ChatGPT is an HMI, and it's the first one that we can actually really get our heads around and really understand how it can benefit us. It's the reason that the robot is not scary. It's the reason that the advanced capabilities in cobots and commodity AI are not going to be as negatively disruptive as perhaps some portend.
So there's an assumption here, and then there's something I want to explore after that. Your contention is that part of what's being demonstrated in the human-interaction piece is a familiarity, or an ease of integration, that's been missing. And that's part of the revolution here: by having human-interaction models that are comfortable, right, that makes the robots much more accessible to people. Okay. And the point that I had after that was, I wonder if there's a new role of programming that's fundamentally different than we think of programming today. Right? Because, you know, AI programming is not in itself, you know, it's a language, it's not a distinctive thing. But I don't think, with all these AI pieces, you're going to be programming a robot with code in the same way. I think what we're describing here is that the robot will need operational parameters and instructions that will probably feel very code-like, and likely be limited to specialists who understand how to speak that parameterized language. Yeah, initially.
Yeah, but that's going to change to a more natural language approach to defining, you know, your interaction with the machine very quickly. I think that's going to happen in, like, a year, or two years tops.
My main concern with this is that natural language is terribly inaccurate, and we still have not solved the problem of turning natural language into accurate commands. Oh, yeah, and I hear that ChatGPT is just going to... like, it's not going to solve this; instead, it's going to compound this. Let's say 90% of the time, or let's say 95% of the time, because that's where natural language parsing accuracy is now, it will work great. The other 5% of the time, it will make things worse. And that is a big risk. Take for example, and I'm going to take this to a ridiculous extreme, but let's say we built an HMI like this for piloting ships or airplanes. What if the computer misinterprets the routing request?
This goes to a bigger question, and the phrase I use is lack of idempotency: with large language models, there's no repeatability. So we have to impose the same sort of controls and validation methods on artificial intelligence that we impose on human intelligence. I'll give you an example. I was doing some statistical analysis yesterday using ChatGPT version 4. I was putting prompts into 3.5 and putting prompts into 4, and with 3.5 the output would look really realistic, right? But it'd be bullshit. It would show the math, and I'm like, okay, well, show your work, so it's got to show all the calculations and stuff, and then I can figure out where it screwed up. ChatGPT 4 was better, but I still would be in a situation where I would create a prompt, refine that prompt through multiple iterations, and then take the refined prompt and enter it into GPT-4 five or n times, and I would get a different result. And sometimes those results were dramatically wrong. And it's like, well, how is it different from, like, peer review of scientific articles? Right, one of the methods we have for validating information generated by humans is the peer-review process. Or we measure the accuracy, or validate the output, of humans using, like, calculators; you know, we've created MATLAB to do statistical modeling and so forth, where we've actually got expert systems that can validate huge parts of the human calculation. I mean, even Einstein had his wife check his work, right? So I think our model of thinking about, like, RESTful systems and APIs and idempotency really just does not fit this paradigm very well at all.
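To make that repeatability gap concrete, here is a minimal sketch, assuming the OpenAI Python client and an API key in the environment; the model name and the statistics prompt are placeholders, not the prompts used in the conversation. It sends the identical prompt several times and tallies how many distinct answers come back.

```python
# A minimal sketch (not from the discussion) of the repeatability problem:
# send the identical prompt several times and count the distinct answers.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment;
# the model name and prompt are illustrative placeholders.
from collections import Counter
from openai import OpenAI

client = OpenAI()

PROMPT = "Compute the mean and standard deviation of 4, 8, 15, 16, 23, 42. Show your work."

def ask(prompt: str, temperature: float = 0.2) -> str:
    """Send one prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,  # lower temperature reduces, but does not remove, variation
    )
    return response.choices[0].message.content.strip()

# Five runs of the same refined prompt; anything more than one distinct answer
# is the "lack of idempotency" being described.
answers = [ask(PROMPT) for _ in range(5)]
for answer, count in Counter(answers).most_common():
    print(f"{count}x -> {answer[:80]}")
```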
I agree, it doesn't fit. But I think what you're describing is an emerging capability. And this is actually where I think I'd like to air-quote "programmer," because it's not prompt engineers either. The UX for these systems, to Joanne's point, is human speech, and not just human speech, it's actually body position, right? There's visual pieces. So interacting with that robot, you hold out your hand and it understands holding out your hand as a gesture, which is mind-blowing. But behind this, we have to have some type of, and this is really what programming is, technical constraints definition.
Hey, isn't that, in fact, much of what Copilot does? Yes, in a primitive form. And it's why I just put in an article that just dropped from IEEE about prompt engineering. What I think this actually gets to is the fact that you are going to have specialist agents, you're going to have copilots, you're going to have interlocutors who are exactly what you described, Rob. They're going to be the ones where temperature is turned way, way down, and they're going to have a serious amount of preparation, basically guardrails on some of the things, and directives. But there will be a kind of degrees of freedom that they have to work within.
Exactly. Sorry, if you're finished, to add onto that: this is exactly what you just said, and it's my thinking around why small language models and specialized models are the way forward. And this is also represented in that same video, because the neural networks that are behind both the robot and the OpenAI contribution to that relationship are well described as small language models. And
this gets into the whole area of agentic or multi-agent systems, agents which, in fact, within the same kind of work group, are actually supported by considerably different LLMs.
Different fine-tunings and so forth. This is where I think this is going, your issue about the smaller language models, or the constrained language models. I think of them as SLIMs, small language models. These are exactly it. I mean,
okay, Star Wars had C-3PO, right? That was the robot that was the interlocutor; it provided the social glue. This is kind of what we're talking about here. And what this article describes is work that's been done to compare human beings doing
prompt engineering versus assisted prompt engineering done by an agent itself, and the resulting value of the output, both in terms of, call it idempotency, or consistency, determinism, but also accuracy, all of the metrics. I think it speaks extremely well. And what that also raises with the smaller language models is: what are the architectures that support them? Does that actually have to be delivered as inference-as-a-service, as we are now seeing it? This kind of gets to the topic of the day, Rob. And that is, there's
a topic of the day, there is.
And in this particular case, what I'm thinking about is architecturally placing small language models in edge architecture, in edge hardware. And by edge, I really do mean mobile, handhelds, something much closer to the point of both data generation and requests, where latency is a big issue. I'm very interested in that.
I feel that the language model evolution is following the mobile hardware evolution, where you're having, like, chips on the system for specific purposes, like voice decoding, like video. Going back to the small language model thing, I mean, I would be remiss to talk about this and not mention expert systems. Right. Which
failed miserably.
Yeah. And, to be fair, the expert systems failed because it was difficult to encode the knowledge, and it was difficult to map that knowledge to a human-interactable system. But it does feel like the goal of these smaller language models is the same as expert systems, in that it's a very focused topic, and you want to prevent the language model from drifting away from that.
Absolutely. So here's the situation, and if we need to move on, let's move, but not past this point. Here's a conversation that I had with a factory worker at an automotive plant on Monday. It was recorded by Fathom, so that I could use it and throw it into GPT, or, you know, anything else that's currently available to me here, because some of the models are not released here. I wanted to put it into Anthropic, but it isn't available here; we can't get access to it. I said, give me a scenario where you faced a problem recently, and how you solved it. And the dialogue was about... he happens to work in a print shop, but they're not just printing; it's basically a 3D printing house where they're making physical products from designs that people have created, where they want to use it as a minimum viable product or something else. So I said, relate the situation: exactly what happened, exactly when you found the problem with the design, and exactly what you did to try and correct it. And he related the whole thing. I then took that transcript and I fed it into an LLM, and I said, break this out into a people-process-technology type scenario and task; record what would be the knowledge that needs to be acquired for an expert system, or a small language model, and give me the results. And in, I would say, about six tries, I had the scenario laid down: this is the problem, this is when it was first noticed, how it was first noticed, and why it was first noticed. Visual inspection was how the issue was found, because there was a 3D object that had been, you know, done by a 3D printer. And I also asked, what would have been other possible solutions, not just the one that this guy chose? And it gave me six different options: change the filament, change the speed, check the hot plate for temperature deviations from specifications, and so on. To me, that's the definition of the agent that you're talking about, the interlocutor who had the context, who had a problem, had a solution, and then could take that one scenario and start building a library of scenarios to ask different people in different contexts, to see what the answers would be.
So, just trying to follow you: what was the intelligence that you used to lay out the strategy? With what system did you set out the initial request?
The first one? Well, the whole thing was recorded by Fathom AI, which did the summary, which did all of the, you know, basics behind it. Then I took that and I put it into GPT, and I said, how would I do this with LangChain? And that was the tooling that I used. And I think that's where you're
at. That was what I was asking. Yeah. It
was, like, chat. And like I said, it took several tries, but eventually it got to the point that I felt confident that I could actually, or one could actually, use this. Not the best method, for sure, and I'm sure there are much more sophisticated tools, but you could actually put that together. Because my big issue, and the problem that I've been trying to wrap my brain around, has been the knowledge-acquisition side for the SLIM, and how you would then take that SLIM and put it with human interlocutors to say, is this correct? So that you could pare down as much as possible the timeline that it would take to actually capture this, and what companies could use to build their SLIM in a way that's not so intrusive to the person who holds the knowledge.
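For readers who want to picture that pipeline, here is a simplified sketch under stated assumptions: it uses the plain OpenAI Python client rather than Fathom or LangChain, and the prompt wording, model name, and transcript file name are illustrative stand-ins, not the actual prompts from the exercise.

```python
# A simplified sketch of the knowledge-capture flow described above: feed a raw
# interview transcript to an LLM and ask it to break the story into a structured
# scenario for a small language model. Model name, prompt wording, and file name
# are illustrative; in practice the prompt was refined over several iterations.
from openai import OpenAI

client = OpenAI()

EXTRACTION_PROMPT = """You are building a knowledge base for a specialized small language model.
From the transcript below, break the situation into a people / process / technology scenario:
- the problem, and when, how, and why it was first noticed
- the steps the worker took to correct it
- the knowledge an expert system would need to reproduce that reasoning
- other possible solutions the worker did not choose

Transcript:
{transcript}
"""

def extract_scenario(transcript: str) -> str:
    """Turn a raw interview transcript into a structured scenario description."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative
        messages=[{"role": "user", "content": EXTRACTION_PROMPT.format(transcript=transcript)}],
        temperature=0.2,  # keep the extraction conservative
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("print_shop_interview.txt") as f:  # hypothetical transcript file
        print(extract_scenario(f.read()))
```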
And I know what we've been doing to try and capture this material. The beauty is it's very simple, right? You're having meetings and discussions where you're talking about, you know, what's going on and what's happening, and you're using the interactions that you've got to basically collect data. The difference between now and the old expert systems is that now we can use LLMs to scan that information without it having to be in an expert-system format. Yeah. And so we can basically vectorize that data and turn it into that knowledge. I guess I still believe that, at the end of the day, we're going to get all that data, and then there's going to be some programmatic, and I'm using programmatic in sort of a general way, boundaries and conditions on how to use this information that's going to feed those models. There's still going to need to be a framework around the knowledge base that we've built. And I think that there's still room for a technical role; it's just different than programming would be. It's where you understand: here's a model that has these limitations and these capabilities, here's a person that's going to interact with it in some way, or it's a drone that's flying into space, or a robot that's interacting. And there will be some type of language that has yet to evolve, or, you know, I'm not as familiar with the pieces at the moment, where somebody is going to have to understand how to bridge them, just like we translate between silicon and binary and the human interface. There's going to be a, you know, human interface engineer, you know, the robot programmer or an AI programmer. It's going to be a weird
role. The way I'm thinking of it is, it's just going to be another layer of abstraction. Think about software programming, starting with, you know, binary, machine code, and actual real bugs, beetles even. Then we had object-oriented, right, where we created more modularity. And then, much more recently, we've got frameworks like Vue or React or Angular. Similarly, on the infrastructure side, we've seen a similar sort of abstraction, you know, kind of culminating in what RackN is doing, right? So thinking about it like, imagine Jest, or a test framework, or React or whatever, and you say, hey, I'm going to build an app that does this, this, this, and this; why don't you deploy out this framework with all the associated libraries that I want, and then we start. Right? So you've created this whole other layer of abstraction in getting things started and taking advantage of the hierarchical levels of modularity across the whole stack. That's kind of how I'm seeing it.
And it's iterative, done by a team. And like any other iteration with a team, you're going to go through cycles, you're going to look for improvements, and the various participants are going to identify and address the issues. So yeah, right. Thank you. I think you're onto it.
Is there a need for a declarative component with
this? Yes, declarative? Yes.
Or maybe not necessarily declarative, but I think, as Collier has said, an important part of what makes declarative appealing is that it's steady: write it down once and then use that to make things reproducible. You need to have the ability to do reproducibility first, though, before you can do declarative.
Yeah, I think that's close to the right way to put it.
Yeah, I'd say the same thing. It's like, well, treat the AI like a human, right? You know, you're going to write some tests, and then you're going to run it through unit tests and system tests and go through the whole flow. And there is a mechanism for taking the output of intelligence and creating a declarative model, which is the actual code that was written, that then gets validated and verified through both human and non-human means. We've got our standard tests and regressions, but then we also have things like code reviews, right? So we're still going to have to do code reviews for intelligence. It's just that in this case, it's not natural, it's artificial.
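As a minimal sketch of that "treat the AI like a human contributor" idea, assuming the OpenAI Python client: the generation step is wrapped in a function and the validation lives in ordinary unit tests that pytest can run. The model name, prompt, and expected JSON schema are illustrative assumptions, not anything from the discussion.

```python
# A minimal sketch of "treat the AI like a human contributor": whatever the model
# produces still has to pass the same unit and regression checks as hand-written
# work, and the prompt plus its tests become the reviewable artifact.
# Model name, prompt, and expected schema are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()

def generate_parts_report() -> str:
    """Ask the model for a JSON report; validation happens in the tests below."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "Return only a JSON object with a 'parts' array of three sample part numbers.",
        }],
        temperature=0.0,
    )
    return response.choices[0].message.content

def test_report_is_valid_json():
    report = json.loads(generate_parts_report())  # fails if the model answered in prose
    assert isinstance(report.get("parts"), list)
    assert len(report["parts"]) == 3
```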
It's funny that you said "treat it as a human," because that's exactly where I believe the problem is with LLMs. With a human, you start teaching them the basics and then build on top of that. With LLMs, we're throwing all the information in there and then telling it, okay, give me something for this. And if it's not right: all right, change it so it fits my view of it. So we started with a haystack and then we're trying to build a needle.
This is why it is important: right now, we're thinking
of LLMs as a monolithic model, right? You know, we've got a neural network with 1.75 trillion parameters, and we're going to prompt it. Well, that's not how human intelligence works. There are 17 different structures in the human brain, and yes, 70% of that is the frontal cortex, the higher language and reasoning, but you've got other specialized organs within the brain. So I was thinking about this yesterday, playing with ChatGPT, when I was using text prompts to describe what I wanted, and it would call DALL-E and create images for me. And I'm thinking, well, this is kind of like the brain, where the frontal cortex is doing, you know, the high-level reasoning, and then things like image recognition and coordination are handled by lower-level neural networks inside the human brain, like motor coordination and other things. I also think about that when I'm playing my guitar and I'm, you know, grooving a finger-picking pattern, right? It's like, well, I need to get to the point where my frontal cortex is not thinking about the finger-picking pattern; it's the cerebellum, right? And that's also what got me thinking, it's like, holy shit, GPT-4 has 1.74 trillion parameters, and the human brain has 100 trillion connections with 86 billion neurons, or whatever it is. And, you know, if the frontal cortex is 70%, that means the other 16 brain structures are only 30%. So
we're getting to that level very shortly. LLMs
will have the same number of parameters as the connections in every structure of the human brain, except for the higher reasoning part. And that'll happen in the next 18 months. We haven't figured out the architecture of all of these different LLMs and how they're going to interact; all of that structure will come, and I'm not smart enough to figure out what that's going to look like today. But we know that it's going to evolve in that way, where you'll have, you know, just the language systems, the visual processing, the motor coordination, all of the different components, right.
And, by the way, the nature of those neurons, as you know, varies incredibly. I mean, you get into some of these neural structures that bear no resemblance to what you'll find in other parts of the brain. There's not just one kind of gate. There's not one kind of unit.
We're still just doing matrix multiplication now.
Well, this is interesting, then, and I'm glad you brought that up, because one of the things that we were talking about earlier was the SLIMs, the notion of a small language model. I've gone down the rabbit hole on this recent research that dropped out of Microsoft Research in China doing ternary representation rather than floating point. You know, they tried to quantize and quantize, and the huge step was what they're calling one-bit LLMs. In point of fact, they're not one bit, but they've played with them. And some bright person came up with the notion: wait a minute, the advantage of one-bit LLMs is that I basically eliminate the need for multiplication. All the matrix operations are addition.
Hmm, interesting. So
What the smart person, or people, did was say: all right, if it's binary, if it's just zero and one, I'm not getting very good results. What if we did ternary: one, zero, and minus one? You still require only additions. The performance? Well, the scaling, the power laws for what they're getting, compared on the same LLM against a LLaMA 2 3-billion or 7-billion-parameter LLM, is, all told, probably about an order of magnitude right now, both in generating embeddings and delivering inference, and the memory footprint is significantly smaller. And the bonus, the frosting on the cake, is that these things, apparently, I haven't seen it in reality, run almost as well on CPU architectures as they do on GPUs. And you've basically eliminated all of the matrix multiplication; it's matrix addition. Yeah. I mean, it's mind-boggling.
It is mind-boggling. I'd have to do some research. I mean, when you're talking, the first thing I was thinking is, well, how does that work? If we have a binary ALU versus a ternary ALU in the processor, what are the clock-cycle costs for doing a multiplication or an addition? I'm thinking back to my assembly coding days and thinking, well, could it really be? Maybe, but it seems to me unlikely, given that you're taking a ternary mathematical operation that you then have to convert into a binary operation so a standard ALU on a general-purpose CPU would be able to process it.
No, go up a level. What they're basically saying is that the LLMs right now, as long as they're using some floating-point representation in the matrices, those floating-point
operations are expensive as fuck to begin with. So, exactly. Once you do your math... right, but your math is floating point. Never mind. This
is matrix addition, as opposed to, you know, matrix multiplication.
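For anyone who wants to see why ternary weights remove the multiplies, here is a toy illustration in NumPy (not the implementation from the paper being discussed): with weights restricted to minus one, zero, and plus one, each output element is just a sum of selected activations, some with the sign flipped.

```python
# A toy illustration of the ternary-weight idea: with weights in {-1, 0, +1},
# a matrix-vector product needs only additions and subtractions.
import numpy as np

rng = np.random.default_rng(0)
activations = rng.standard_normal(8)            # one input vector
weights = rng.integers(-1, 2, size=(4, 8))      # ternary weight matrix in {-1, 0, 1}

def ternary_matvec(W: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Compute W @ x using only additions and subtractions."""
    out = np.zeros(W.shape[0])
    for i in range(W.shape[0]):
        out[i] = x[W[i] == 1].sum() - x[W[i] == -1].sum()   # add, subtract, or skip
    return out

# Same result as an ordinary floating-point matmul, but no multiplies were needed.
assert np.allclose(ternary_matvec(weights, activations), weights @ activations)
print(ternary_matvec(weights, activations))
```

The assert at the end just confirms the addition-only version matches an ordinary floating-point matmul on the same data.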
Yeah. Okay. Right. Preparation
for quantum?
That's a good question, and I honestly don't know, but I will drop the paper, or a reference to the papers. Yeah,
if you wouldn't mind, I would appreciate that, because I was just working with something in quantum, and it sounds like there may be a future in that direction coming out of Big Blue, Joanne.
I think there absolutely will be; I think you're definitely onto something. I guess the way I would say it, though, is that it's not something that's in preparation for quantum, because that implies intentionality, but rather that things are evolving to a state that is going to be very complementary to where quantum is headed. So I think there's definitely a convergence there. Well,
I think, yeah, what I'm reading and what I'm hearing out of Big Blue and some of the projects in quantum is that our days of floating point are numbered.
This is the way everything will go forward, and it makes perfect sense to me. I can't put my finger on exactly why, other than cost, calculation cost, you know, any of the costs under whatever nomenclature of cost you want to give it. But I believe we're probably already there, not in a commercial sense but in an experimental sense. Absolutely.
Yeah, you're right. I mean, it comes down to, like, computations per watt of energy. And that has to take into account the economies of scale of production of the devices that you're going to run the calculations on. Quantum may be many orders of magnitude more efficient, but until you have scale economies in terms of production of quantum systems, it's still going to be kind of the traditional silicon. Again,
what I think the impact of this is, is not on the foundational models, the giant LLMs. The impact here, in my mind, is on SLIMs. Absolutely. And what I don't know is whether there will be requirements for specialized hardware to run it on CPUs, or to do it well on GPUs. If I had any guesses to make, it would be the addition in silicon of a different kind of memory management. But that notwithstanding, it doesn't require five-nanometer or three-nanometer processes. It's CPU architectures that a lot of chip manufacturers know how to work with.
Check out the paper.
I would not be surprised if, eventually, once it's commercialized, we start with the introduction of quantum coprocessors, like we did with floating-point coprocessors back in the day. And you're
absolutely right. I think that'd
be very interesting. All right, we are out of time. So next week I am on the road, and I had hoped to be able to open this bridge up, but I don't think I'm going to have time. So I'm going to cancel this for next week. Sorry, all.
Cancel culture.
So what are you doing next week, Rob?
I'm visiting some customers in New York. I'm in Nashville for the first part, at a bank automation summit, just not as exciting to me as I thought it would be. And then, yeah, on to New York City.
I actually saw that on LinkedIn, and I thought about heading up to Nashville. It's only four hours for me, but my ticket would have been 1500, and I'm like, really? Do I want to go for that beatdown and pay 1500
bucks? Yeah, well, you could have come; we have some discounts or complimentary tickets. But you should
Well hey, if you can hook me up I may drive up for a day but I
think we're outside the window for it. Yeah, unfortunately. But we did have some New York customer visits; originally there was an event, and that got moved out to June, but now it's back on. So my day is getting filled up, which is good. I like it when I go to New York and my days get filled up. But, Rich, I'm not coming out to see you anytime soon, unfortunately. The valley just hasn't had the siren call that I'm used to.
Okay, I'll make my way, because I'm actually making my way out to New York in June, and probably Chicago in May. I'm actually going to try to do a little bit more traveling. And I have reason to go down to the DC area, in which case I will make sure to let you know. Come on, on this one-bit LLM stuff: let me suggest you read the paper. It's not very long, and it makes some pretty outrageously great claims. Now, I'm waiting for independent verification on some of it. But let me know if you're interested, because I really have done a deep dive on some of this, and I'm starting to sound like a preacher on a street corner.
I'm also going to upload a file; it's actually a paper written by data science folks from a few universities, such as Chicago, on their thoughts about how LLMs are going to disrupt data management.
I'd be really interested in seeing that. Yeah,
Well, it's still... you know, they've got a long way to go in terms of thinking about that. I'm actually partnering with a data science professor at SMU to write something around identity-based data access. So that'll be coming out; I'll let you guys know when I get that written.
Well, one of the things I love about the Cloud 2030 discussions is that even when we have an exciting topic planned, and that day we had planned to try and go back to our AI operations discussion, something near and dear to my heart, it's okay for us to take an exciting, interesting topic and apply ourselves to decomposing the impacts. Cloud 2030 is always about the impacts of the technology, and how it's going to be programmed, controlled, used, and disruptive. Something that's a lot of fun for me when we talk about it like this. If you're listening at this point, it's probably interesting for you too. I would encourage you to join us Thursday mornings at the2030.cloud, where you can see our schedule and our topics, bring in ideas, and be part of our book club. There is so much fun to have and so much to discuss in this group; I hope you'll choose to be part of it too. Thanks. Thank you for listening to the Cloud 2030 podcast. It is sponsored by RackN, where we are really working to build a community of people who are using and thinking about infrastructure differently, because that's what RackN does. We write software that helps put operators back in control of distributed infrastructure, really thinking about how things should be run, and building software that makes that possible. If this is interesting to you, please try out the software. We would love to get your opinion and hear how you think this could transform infrastructure more broadly, or just keep enjoying the podcast, coming to the discussions, and, you know, laying out your thoughts on how you see the future unfolding. It's all part of building a better infrastructure operations community. Thank you.