Hello, I'm Rob Hirschfeld, CEO and co-founder of RackN and your host for the Cloud 2030 podcast. In this episode, we dive deeply into AI in manufacturing and how to improve manufacturing outcomes by better analyzing data. If you are interested in manufacturing, or interested in advanced applications of AI and digital twins, where we create accurate representations of physical items, this episode will hit all of your favorite topics. I know you'll enjoy it.
We're making a small adjustment, which is really useful: acknowledging that the node-ready component is something we can standardize. But the layer immediately above that, like configuring the OS and adding software onto the nodes, is totally bespoke. We actually create resistance if we go higher in the stack in our customers' environments. What they need is a node-ready state, and then they want to take that and do custom stuff on top of it every time. Whenever we push even a little bit higher into "well, we could install whatever," it derails the whole motion for us. So part of what we're doing is helping people do that; we're not going to productize what they don't want. It's a pretty small adjustment, really just ceding some ground we weren't winning. But there's a messaging piece, and we're pulling Greg into doing more of that; it needs a person, a strategic person, to help customers do that work in a methodical way. So it should be exciting. I mean, we're on the cusp of doubling sales. I just can't tell when, which drives everybody nuts. We have a pipeline to double our sales this year, but I would be very grateful when they close. Sorry.
I said I'd be very grateful for it, because there are many who have no pipeline. Yeah, we're thankful for that.
And how are you, sir? Speaking for me, I'm doing okay. It's wild and wacky, but I'm doing well; lots of things happening. I've been two days at the AI Engineer World's Fair, as they're calling it. I think that may be a little exaggeration. It's been interesting. The worst aspect of it is that there are just so many little two- or three-person startups in there getting their 15 minutes of presentation, and the result is that you don't get enough of a chance to hear what some of the more established companies have to say. But it's been a good place to buttonhole people and ask them the hard questions, watch them squirm a little, those that need to be squirmy. And, yeah, I've been getting a lot of interesting good news on a couple of things. The distributed data set version control platform is gaining some interest in a couple of good places. So that's been nice. Generally, it's all good.
Well, I have been deep diving into physics AI. Yeah, physics AI.
Yes, anomaly detection on a shop floor. So here's the issue. If you look at OEE, part of it is efficiency and part of it is effectiveness, right? You can make a shop floor as efficient as possible, but you can make that factory even more effective by looking at where root-cause issues are actually happening, because people make mistakes. Suppliers ship the wrong raw materials. Digital data may be dirty. You could have been hacked. There are buckets, right? So when you go and look at root cause, you can't necessarily just point at one piece of a machine. It could be a process upstream of that machine that's actually screwing up your process; a prior step in a different machine or a different process. So all of this ends up becoming a kludge of SPC, statistical process control, and AI. Part of the issue is figuring out which bucket to look at first to get to root cause fastest. And the second part of the problem is: is it physics-based AI, or is it SPC, or is it a combination of the two? But ultimately, you want to know which step of a process and which machine in that step, after you've ruled out the people and technology parts being wrong, and also ruled out the raw material, which is presumably quality assured. So if you had, say, nine CNC machines in a group, you want to know on which shift, on which machine, why did it stop? What's stopping the factory line from running? All of those whys. If you start asking why the equipment broke and break it down, you get to this kludge of SPC and AI. So take that core and then extrapolate contextually around it to be able to give someone an answer or an insight into the root cause: how fast can it be fixed, and with how little downtime?
Or predict a no-downtime event, meaning you have no downtime. Downtime events normally happen 11 times a month in a factory, at a cost of between $39,000 and $2 million per event per hour. So that's a big cash savings. And the more you can forecast it, as opposed to just predictive maintenance or predictive asset management, the more you can actually say: okay, it's more process than it is people, it's more process than it is equipment, it's more raw materials in process that actually cause these events to happen. If I can reduce the frequency and the duration, I am laughing all the way to the bank.
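For context, OEE is conventionally computed as availability × performance × quality. A minimal sketch; the OEE inputs are purely illustrative, and the event count and per-hour costs are the figures quoted in the conversation, not real factory data:

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness: product of the three classic factors."""
    return availability * performance * quality

# Illustrative numbers only -- not from any real factory.
a = 0.90   # availability: uptime / planned production time
p = 0.95   # performance: actual throughput / ideal throughput
q = 0.99   # quality: good units / total units
print(f"OEE = {oee(a, p, q):.3f}")

# Rough monthly downtime exposure using the figures discussed above:
events_per_month = 11
cost_low, cost_high = 39_000, 2_000_000   # dollars per event, per hour
print(f"Monthly exposure: ${events_per_month * cost_low:,} "
      f"to ${events_per_month * cost_high:,} per hour of downtime")
```

Even at the low end of that cost range, shaving one or two events a month off the frequency pays for a lot of analysis.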
Yeah, that makes sense. So you're addressing the issue of physics AI: what you're looking for, having eliminated human process, supply chain issues, whatever we're talking about, is, I'll call it mechanical, but it's a physical aspect, either the materials being used or something in the machine. Okay. So can I ask a question about what approach is used for anomaly detection? You've got, quote, a digital twin, perhaps, or you've got a lot of sensors. Are there particular ways in which you address anomaly detection? Because it couldn't be just parametric, you know: it hits a certain level, you use that as a trigger point, send up a red flag. That just ain't going to work. So what do you actually end up using?
variance analysis, linear regression,
so that I can tell
what happened two steps previously, upstream, that would influence the event actually happening downstream. Yeah. So think about it: the parametric gives you a tolerance, right? Or a recipe would give you the tolerance. But there are variations that happen before you hit the level of tolerance that have tremendous impact on what happens after the fact. So the
linear regression, yeah, and you want to look at a lot of non-parametric anomaly detection, which also gets back to data, basically data provenance, data lineage, and so forth. There you go; that's where I've been living.
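One common non-parametric detector along these lines uses a rolling median and MAD (median absolute deviation) instead of mean and standard deviation, so it makes no normality assumption about the sensor trace. A minimal sketch; the window size and threshold are illustrative assumptions:

```python
import statistics

def mad_anomalies(series, window=20, threshold=3.5):
    """Flag points whose modified z-score, computed against a rolling
    median/MAD baseline, exceeds the threshold. Non-parametric: no
    assumption that the readings are normally distributed."""
    flagged = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        med = statistics.median(hist)
        mad = statistics.median(abs(x - med) for x in hist)
        if mad == 0:
            continue  # flat window: no spread to judge against
        score = 0.6745 * (series[i] - med) / mad  # modified z-score
        if abs(score) > threshold:
            flagged.append(i)
    return flagged
```

Because the baseline is a median, a single upstream spike does not drag the reference point the way a rolling mean would, which matters when the anomaly you care about sits two steps before the tolerance breach.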
Yes, and my issue to resolve is contextualizing all of that core with MES, ERP, supply chain management, HRIS, all of the back-office systems of the factory, so that I have a core of anomaly detection but can say, in a generative AI way, "Hey, why did this happen?" and get an answer back that says: well, this process step, at this point, on this machine, is directly responsible for the variance which caused the downstream impact of stopping the line.
Then you're looking at, okay, what caused that? So you have to move back from that point.
Right, right. So behind the scenes I'm doing all the backwards linear regression, but to the user, whether it's a shop floor worker or an operator or a process engineer or a controls engineer, or even an executive, I can say: yo, you have three people on shift one on day three who don't respond quickly enough to an alert to prevent this from happening, and this is now costing you X hundreds of thousands of dollars per hour. And how do I do this in a way that not only communicates the message but uses pattern recognition from the way they do visual inspection as well, right? What is that operator actually doing when they get that alert they're not responding to? Is it a mission-critical process step, or is it just "I'm lazy and I don't want to walk over to the machine"? What's responsible for that as well? So I'm looking holistically, and I need this from every cell on every line in every facility. I have 30 facilities, and I want to be able to pinpoint exactly what's going wrong, where.

How rapidly do you need this? Milliseconds? You need millisecond analysis that there is an anomaly, or millisecond root cause analysis?

I need milliseconds to know there's an anomaly; I need sub-minute, so near real time, for the root cause, or as close to real time as I can get it. That would be one way. The other way to do it, and I'm trying to figure this one out, so engineers of the group, please contribute your brain power: how do I do this in a streaming-data sort of way, where I can visually represent to a line operator, here in the digital twin, this just screwed up, go do something about it?
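One way to meet the millisecond budget is an online detector with O(1) state per signal, so the anomaly flag is produced inside the same message-handling call, and the slower sub-minute root-cause correlation runs asynchronously over the flagged window. A sketch using an exponentially weighted mean and variance; the alpha, threshold, and warmup values are illustrative assumptions:

```python
class StreamingDetector:
    """EWMA-based online anomaly detector. Constant-time, constant-memory
    per update, so it can sit in the hot path of a message stream."""
    def __init__(self, alpha=0.05, threshold=4.0, warmup=30):
        self.alpha, self.threshold, self.warmup = alpha, threshold, warmup
        self.n = 0
        self.mean = None
        self.var = 0.0

    def update(self, x: float) -> bool:
        """Return True if x is anomalous relative to the running baseline."""
        self.n += 1
        if self.mean is None:
            self.mean = x
            return False
        dev = x - self.mean
        anomalous = (self.n > self.warmup and self.var > 0
                     and dev * dev > (self.threshold ** 2) * self.var)
        if not anomalous:
            # Only learn from normal-looking points, so a spike does not
            # immediately inflate the baseline it is judged against.
            self.mean += self.alpha * dev
            self.var = (1 - self.alpha) * (self.var + self.alpha * dev * dev)
        return anomalous
```

Feeding it a steady oscillation followed by a spike flags only the spike; the warmup keeps it quiet until the variance estimate has settled.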
Topic of the day. Yep, I know, right on topic.

Must it be digital? I'm sorry, must it be visual?
It could be multimodal, but I'm trying to do it visual-only, for the reason that I have two other challenges I'm trying to resolve with the same endpoint solution: one, I have a way to extract knowledge from the frontline worker who may very shortly be walking out the door because of either a RIF or retirement; two, I have a way to add contextualization or semantic layers to the digital twin so that it becomes my semi-backplane, if you will, to all my back-office and integration products.

Is this a normal process for digital twins? I mean, this is where we're getting into the question for the day: how do you make digital twins more AI-savvy, or more AI-ready? Part of what you're laying out are problem statements such that, if you applied them to digital twins and to their inputs, you should be able to get better outcomes, right? But how to get that outcome is not as clear to me.
Well, how to get the outcome: you can go to an open source tool, or you can go to NVIDIA and use the Composer. You can create your photorealistic representation of the shop floor or the line or whatever, and you can start feeding commands into it. That's one way. Or you can take it down a pipeline, feed into it, and have it build on command. The problem with digital twins is putting them together, because it's very hard to do one giant process end to end. You can't do that.
That was kind of why I was asking the question. The first order of business is putting it together, the visualization and how you present it; you've got a variety of options there. But the issue is the comment you made about it being on a streaming, like a message-streaming basis.
If you were going to do that... I already have my protocols.

Well, basically, what you're going to end up doing is turning your message bus into a data store, a big data bus.
Message bus into a data store?
Yeah, because basically what you're talking about is... the question is, how do you reduce the amount of excess data getting poured into the message bus, into the stream? Because here's your issue: you've got all these different systems, all the different equipment, with a variety of protocols, and you're certainly not going to consolidate it all in one place; that just ain't going to happen. So it's got to be distributed from the outset. That's kind of why you're using a message bus, or a message stream.
So it becomes an issue of what you establish at each of the points of, I'll call it data capture, log information, whatever, and then how you reduce the amount of overhead in sharing the appropriate access to that data, making it available to the totality of the digital twin representation. And that means you've got to add an enormous number of real-time policies, policies that will act in real time. So what that says to me is that you've got to apply policies literally at the exposed interfaces, the APIs, of all of the systems you've got feeding into this model. It's not the protocols per se; it's actually the policies and the selection of what gets shared and with whom.
Okay, so role-based, number one, for security, as well as data-stream protocols rather than APIs, because I can use MQTT-type brokers wherever possible. However, wait: I can do message transformation in flight. So I'm looking at, could I technically, and this is part of the discussion around digital twins, actually parse the packet coming through the broker to drop the noise, prioritize, and deliver to the digital twin as quickly as possible?
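A sketch of that in-flight transform idea, written as a broker-agnostic function that a bridge could apply between subscribing and republishing. The payload schema, tolerance band, and topic naming here are all invented for illustration; real shop-floor schemas will differ:

```python
import json

def transform(payload: bytes, tolerance=(9.5, 10.5)):
    """In-flight transform for a broker bridge: drop in-tolerance 'noise'
    readings entirely, and tag out-of-tolerance ones with a priority so
    the digital twin consumes the worst anomalies first.
    Returns (topic, payload) to republish, or None to drop."""
    msg = json.loads(payload)
    value, (lo, hi) = msg["value"], tolerance
    if lo <= value <= hi:
        return None                       # in spec: drop, don't forward
    severe = value < lo * 0.9 or value > hi * 1.1
    msg["priority"] = "high" if severe else "normal"
    topic = f"twin/anomaly/{msg['priority']}/{msg['machine']}"
    return topic, json.dumps(msg).encode()
```

Putting the priority in the topic path means subscribers can use plain topic filters (e.g. `twin/anomaly/high/#`) to get the urgent stream without parsing every message.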
On what basis do you prioritize at the source?

Well, anomaly comes before regularity.
Anomaly, out of tolerance.

In your model, where does an anomaly get identified as an anomaly? As close to the source as possible, or somewhere downstream? At the source.
Wow,
My models are applied at the source. So, do you have a model that's in the systems, or right next to the systems? I guess one of the things I was wondering: does that involve a digital twin? Are you training on a digital twin? Because the thing I was expecting you to go towards was that we would basically vectorize the inputs of digital twins, or maybe vectorize the output. And I can't get this out of my head, so tell me where I am on this: a couple of months ago, we looked at those robots that were integrating with ChatGPT, and they're using AI to train robots, right? And at the SW2 Summit, somebody made a comment about vectorizing the robot's inputs and outputs as the AI training mechanism. So they're literally just vectorizing for the AI digital twin, but they're not actually building a whole model. They don't have to build a model; they just say, here are the joint positions, and here's what happened. They're literally training it more generally than that, is what I was understanding. So I can't get out of my head this idea that we're basically building the twin using the joint positions, training the robot, and then letting AI figure it out.
But okay, in that case you're really not getting the kind of distribution that Joanne is talking about. The other aspect here is that pure vectorization ain't gonna cut it.
This is where my head exploded, because I agree with you; it seems like there's physics and laws. But then again, I would have told you that understanding text by just throwing text into a system wouldn't work either. So...
And then I want to get back to where the model goes; I'm still trying to understand that.

If you're going to vectorize things in order to make them usefully clumped, and use proximity and similarity types of measures, you want to add things like re-ranking. If you use pure LLMs to just take the top five, you know, top-k with k=5 or k=10, that's probably not good enough. What you need is a local, call it an entity model, an entity graph, whatever you'd like, that actually has the vectorization going on locally, and that in turn makes the selection of what kind of information is moving down into the pipeline, if you want to think of it that way. That means you're looking for SLMs. You're looking for small language models, limited stuff that can coexist pretty close to the source of the flood of data. And what it says is: I'm making a local model, if you want to think of it that way, a local model that assists me in finding and selecting the right data
to submit. And that makes a lot of sense to me, because the idea of just letting an AI invent how to move a robot arm and all the joints and things like that does not strike me as something you would leave to interpretation. It would have to be pretty rigorously tested and planned and put through test cases, probably on a digital twin. Or maybe you would have it run the digital twin, design a plan, and then, once you know it didn't go out of bounds or break the robot, you might then execute the plan. Not that it should take long to do that, but that's where I'm actually imagining this going.
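The retrieve-then-re-rank pattern described above can be sketched in a few lines. The two-stage structure is the point; the second-stage scorer here is a stand-in for whatever small local model would do the real re-ranking, and the item shape is an assumption:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length, non-zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def retrieve_and_rerank(query_vec, items, k=5, rerank_fn=None):
    """Stage 1: cheap vector similarity over everything, keep top-k.
    Stage 2: re-rank only the shortlist with a costlier scorer."""
    shortlist = sorted(items, key=lambda it: cosine(query_vec, it["vec"]),
                       reverse=True)[:k]
    if rerank_fn is not None:
        shortlist.sort(key=rerank_fn, reverse=True)
    return shortlist
```

Because the expensive scorer only ever sees k items, it can be a slower, smarter model than the one that produced the embeddings.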
Yeah, actually, here's a question, and I don't know the answer; I should, but I don't. Is there a reasonably lightweight, distributed graph database, or knowledge graph database technology, that you could place close to these major systems that are feeding into the bus? If you can find something like that, in conjunction with vectorization, also done locally using an acceptable vectorization model, and make that the selection of what goes on your stream, on your bus, under conditions of anomaly, that probably gets you faster to a kind of heterogeneous shop floor environment. And it's also the kind of place where, if you sent it out to the pros from Dover, if you had a physics issue, it becomes multi-agent, but not locally; it's multi-agent at some consolidation point.
Does that make any sense? Yeah, it does. And to Rob's point about the digital twin: the twin is only as good as the data behind it, right, and the rules behind it. I didn't quite grasp what you were talking about with the robot; I can see it, but my ultimate goal with what I was speaking of is that I can take the data I'm collecting and using for anomaly detection and use it to train a model to teach the machine. So instead of the machine teaching me, I'm teaching the machine to take care of the shop.
I'm sorry, the machine you're talking about: is it the shop floor, a piece of equipment, or what?
The equipment on the shop floor, but also the process, because the processes can't be so rigid and rule-based that they can't adapt.

This is why I'd say I'm even more in favor of putting some sort of a network nearby, an entity-and-relationship data store that actually does get trained, that learns. It's basically a learning machine: how do I react better under these conditions, even if they're really rare? If they happen, there's a memory. I create relations; I invent, or reuse, relationships between nodes on the graph, and I put the vectorization in at those nodes. That's how I'd do it. And, yeah, I think it's a hybrid, a small hybrid model reasonably close to the major piece parts of your estate, your manufacturing estate.
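A toy version of that entity-and-relationship store with vectors at the nodes. This is an in-memory sketch, not a distributed graph database, and the node names and embeddings are invented:

```python
import math
from collections import defaultdict

class LocalGraph:
    """Tiny entity graph with an embedding per node: relationships give
    you one-hop root-cause candidates, vectors give you similarity search
    over the same entities."""
    def __init__(self):
        self.vecs = {}                 # node -> embedding
        self.edges = defaultdict(set)  # node -> related nodes

    def add(self, node, vec):
        self.vecs[node] = vec

    def relate(self, a, b):
        """Record an undirected relationship (e.g. 'feeds material to')."""
        self.edges[a].add(b)
        self.edges[b].add(a)

    def nearest(self, vec, k=3):
        """Nodes most similar to a query embedding."""
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            return dot / (math.sqrt(sum(x * x for x in a))
                          * math.sqrt(sum(x * x for x in b)))
        return sorted(self.vecs, key=lambda n: cos(vec, self.vecs[n]),
                      reverse=True)[:k]

    def neighborhood(self, node):
        """Entities one hop away: candidates for upstream root cause."""
        return self.edges[node]
```

The two query paths mirror the conversation: `nearest` answers "what looks like this anomaly?", `neighborhood` answers "what's connected to this machine?"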
Right. And Rob, to your point about the twin and where the twin goes: if you can leverage this data so that I teach the machine, rather than the machine teaching me, and make that a learning system, then you can build SLMs off of it that are industry-specific or even process-specific, and gradually, over time, take those models and reapply them back to the digital twin. The value of the twin is not just the visual representation or the 3D renderings of the equipment; it's the parametrics, the SPC, all of the data behind that, and how it can be used to inform upstream process and downstream outcome and be shared across the ecosystem, both with your supply chain and your customer base. Because I can now share this information with a supplier, say, or grade the supplier on the quality of the raw materials. If I share it with the supplier to say, this is where your piece part, your widget, breaks; you changed the design of your widget and didn't inform me, and now I have all this waste or rework or downtime, then you lose the next contract, or whatever.
Part of the issue is the retention of this. The data lineage is one thing; the data provenance is what you use to establish accountability or responsibility outside of the actual process. And that's vital to you.
Yeah, and here's the flip side of that. Not only are you able to feed that information back to your suppliers, or have them feed you the raw data for every run of widgets they make, so that you know how much is quality and how much is not, because the raw materials come out of your inventory onto the shop floor. But think about the opposite effect. More and more now, manufacturers don't just make products; they sell experience. They make the product, but they sell the experience
of the product.
Caterpillar doesn't just make a tractor; it makes an information management system for agribusiness. It's the experience of the agribusiness, right? Because you've got weather data and commodity information and all that other kind of stuff. So if you have this stream of data and you're representing it visually in 3D as a digital twin, you also offer your customer the opportunity to make use of that services component of what you're manufacturing as their quality assurance guarantee, and also to resell it or get revenue from it as they sell their product to their customer. Because in manufacturing you always have to look at your supplier's supplier and your customer's customer, because ultimately it has to come back. So on the digital twin, you have to be able to connect the dots between each part of the twin and understand the ramifications, and not as a daisy chain or serialized system. Think of it like an ice cube tray, right? You've got blocks in each part of the ice cube tray, but the frame is the digital twin that you're using. You may not be able to have 100% rectangular cubes all the time, and you need to address that, or you may have variations in the density, the volume, all of that. The digital twin becomes your operating backbone, your way not only to visualize it but to get information out of it.
I think, quite frankly, that comes first, before visualizing it. I know that visualizing is important for the customer, but this is a case of: are you visualizing the right thing, and what are the different ways of informing a customer of an issue, or allowing the customer to monitor stuff? I'm not dissing the whole notion of visualization; it's important, but it feels to me like it's secondary to the problem you set out, which is the anomaly detection, the analysis, the learning aspects of it, the memory aspects, including local actions that might be taken to mitigate or remediate problems over time. That's where the money is, as you were saying, and that's really damned important. But it's a really tough one. Who do you see as the operator of the digital twin, the collector, the creator of this environment? Is it the enterprise that's manufacturing? Is it some collection of parties? Who does this? How do you link, for example, the supplier's system with the next customer's system down the chain: supplier's supplier, customer's customer?
Well, first of all, any time something is made, go all the way back to ideation. You have an idea for a product, a new widget. Let's call it a three-dimensional widget that does clockwise turns and counterclockwise turns and also flips itself, something bizarre like that. In ideation you're already using a visual; you're doing the parametrics. So that's a visual medium, right? You're looking at things in 3D to make sure that widget fits with the other widgets in the design of the product. So you have some of the data coming from ideation or early engineering, EDA, CAE, whatever you want to call it. That information gets shipped to the manufacturer of the ultimate product along with their order, because it's the manufacturer ordering the widgets, and their first-level supplier may be getting those widgets, or the tin to make those widgets, from another supplier, right? So that's two steps back. The manufacturer has the responsibility of meeting their customer's BOM or approved vendor list. So that's a conformance issue, a provenance issue, a lineage issue. They're the ones who then take that data and start their manufacturing process. Along the way, they're going to have recipes, tolerances, changes, all of that dumped in real time onto the shop floor, because at every step of their manufacturing or production process they have QA.
QA, right?

So they have to keep quality assuring at each step and with each new piece part.

But isn't that within defined metrics? I mean, early detection on that makes a lot of sense. But are you saying that there could be things that are within spec but are still problematic, or will become problematic?
Yes, okay,
That happens all the time. Okay? So being in spec does not mean that a part isn't going to cause issues downstream. And it's detectable? It's detectable if you can analyze the data effectively to catch those issues, yeah.
And that's where visual inspection comes in, right?

But it could be more than visual. It could actually be, you know, if you're machining a part, the temperature readings off the machine could be out of spec, right? So you could be like, hey, this part's perfectly in spec, everything's great, but when I milled this piece the machine's temperature readings were anomalous, and therefore I have some concern.
You not only have concern, but you have either waste or rework.
And those anomalies you can now note. Here's where the training comes in, okay? The training comes in because now you're saying: I'm going to feed that into a model. I'm going to go back over time and say, here are these anomalous readings, and it's a huge data set, but hopefully the AI should be able to pick up the anomalous readings that then led to bad parts.
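The simplest version of "feed the anomalous readings that led to bad parts into a model" is a nearest-centroid classifier over labelled runs. A real system would use something much richer, but this sketch shows the shape of the training loop; the feature values and labels are invented:

```python
import math

def train_centroids(samples):
    """samples: (feature_vector, label) pairs -- e.g. per-run sensor
    summaries labelled 'good' or 'bad' by downstream QA. Returns the
    mean feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [s / counts[lab] for s in acc] for lab, acc in sums.items()}

def predict(centroids, vec):
    """Classify a new run by its nearest labelled centroid."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda lab: dist(centroids[lab], vec))
```

The labels here come from downstream QA after the fact, which is exactly the "these readings then led to bad parts" linkage described: historical outcomes supervise the model that judges future runs.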
Where have I seen it before?

Oh, my God, by the way, I've seen it, okay.

Part of this is getting into the supply chain. Every time you get an alloy from one supplier, it goes to the next one for a machined part, and that part then goes to the manufacturer we're talking about. There's an accountability, a chain of responsibility, where you're basically saying: all right, I'm laying this at the doorstep of the first manufacturer, the one who messed up the alloy being supplied. That's where I end up going back to. But you've got so many different points of potential sources. So, actually, come back to that question I asked Joanne: do you assume that these systems are, in fact, conversant with one another? Or is there a nice boundary between them, and an acceptance of, all right, here's the data I need from you, but I'm not reaching back into your systems? I mean, logically, how far back can you go?
So let me give you two streams of answer. If I reach back to a supplier's system, I can reach back to, let's say, where my order number is and everything that happened from there. If I reach back in my own manufacturing facility, I go to my historian, because that's what the historian is for: it tracks everything in time series. That's a time-series database, right, internal. And I can put another piece in there as well to create that provenance or lineage: when did I receive it in inventory, what documentation, what data came with it? So I know.

And was there an attestation with that? Correct, okay.

And were they in compliance? Because I have WEEE compliance, I have RoHS compliance, I have tariffs, I have specifications based on the customer's requirements. Because, remember, as a manufacturer I may also be playing middleman and selling my own parts, or equivalents of parts, on an approved vendor list that I have a contract price for, so I'm going to make more markup, right? And there's a lot of hidden or dark data, dark metadata, that goes into that.
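The historian-plus-provenance idea can be sketched as a time-series store that also keeps lot-receipt records, so a query at time t returns both the reading and the paperwork for the lot then in use. The field names and metadata shape are illustrative assumptions, not any real historian's API:

```python
import bisect
from dataclasses import dataclass, field

@dataclass
class Historian:
    """Pairs a sensor time series with material-lot provenance records,
    so 'what happened at time t?' answers with both the reading and the
    lot documentation in effect at that moment."""
    times: list = field(default_factory=list)    # sorted record timestamps
    values: list = field(default_factory=list)
    lots: list = field(default_factory=list)     # (received_time, metadata)

    def record(self, t, value):
        """Append a sensor reading (timestamps assumed increasing)."""
        self.times.append(t)
        self.values.append(value)

    def receive_lot(self, t, metadata):
        """Log a raw-material lot arriving in inventory, with its paperwork."""
        self.lots.append((t, metadata))

    def context_at(self, t):
        """Latest reading at or before t, plus the lot then in use."""
        i = bisect.bisect_right(self.times, t) - 1
        lot = max((l for l in self.lots if l[0] <= t), key=lambda l: l[0])
        return {"reading": self.values[i], "lot": lot[1]}
```

The point of the pairing is the two-stream answer above: the time series tells you what the machine did, the lot record tells you whose material it did it with.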
Would you then share that with the end consumer? Does it add value to the part to bundle and share some of that information? Yes,
it does. Because think about the most recent event, the battery fire in Korea, the lithium batteries. If they had known, if they had this kind of system... Yeah, there was a huge fire; many people were killed, many people were injured.
I'd heard something about it, but I didn't know the details.
yeah, just in the last week, yeah.
Or think about the bad batteries maybe 15 years ago, when all of the manufacturers were using Mitsubishi batteries for their laptops, and they were exploding on flights, and you got to a point where you couldn't take your laptop on a plane with the battery inside it; you had to put one in cargo and one with you.
And that was literally a manufacturing defect. Yep,
It was, yeah. Because when the parametrics were done, the tests, you know, it's three parallel streams, right: electrical, software, mechanical. When they all converge and come to test, the heat tolerance of the battery was not in spec to what it needed to be for the mechanical and other parts, but they passed them anyway. In other words, 10 million batteries were recalled across six different brand-name manufacturers because one company, one supplier,
screwed the pooch
and nobody picked it up along the way, until the laptops started catching fire on planes, on cargo planes; the batteries, not the laptops, sorry. But every manufacturer...
The thing that made it work is that the laptop manufacturers actually had to coordinate. I was at Dell, or talked to Dell people who were involved in helping handle this, and they had to actually coordinate, because some of the other vendors were like, oh, those are Dell laptops, we're fine. And Dell had to work behind the scenes to say, no, you're going to get hit with this too; we have to coordinate, because it's a common supplier. And it took a long, long time to get the supplier to act. So what you're fundamentally saying here is that there's a lot of data, more and more data being collected every day in these manufacturing processes, and our ability to analyze that data is going up, which should ultimately create better product outcomes. It should, but...
But you can't sit there and place the complete onus of holding all of the data on the end of the manufacturing chain, because the last manufacturer that touches it has got to have a means of
right, going backwards,
backwards in time, for purposes of provenance. Not lineage, well, lineage eventually, if you're going to solve the problem, but the first order of business is: on whose watch? Who's got accountability? Who's got responsibility?
And there's big money that changes hands through all of this. If I go back to the topic of the digital twin: let's assume that I took a simulator and modeled out my factory, my value chains, my supply chains, my downstream customer, and my customer's customer. If you, Rob, are my customer's customer, and Rich, you are my customer, then I can create value for you, who then create more value, and create a flywheel effect by helping you get to the point where you're publishing data to Rob, which Rob can then feed back to you, which you can then feed back to me, and close that loop all the way around, so that the autonomous vehicle, the battery, the laptop, whatever product, the as-a-service part, benefits you and benefits everybody all the way back through the chain. Because
therein lies a really difficult problem to solve, because, well, let's put it this way: it hasn't been done before. Basically, the movement of data, the return path in a manufacturing supply chain, is almost nonexistent. The only time you get information of any kind is when your product blew up. You're getting warranty issues, you're getting exception issues; you're not getting anything that doesn't count as anomalous, or
Yeah, even if it's
anomalous, it's like, okay, I'll live with it. This is the amount of waste I'm willing to accept; I'll throw that amount in the garbage bin and keep on going. It's the back channel, yep, the back channel that carries the return information. Once again, have you actually seen anybody do that across separate manufacturer-supplier relationships and supply chains?
Yeah, so Siemens has part of it. Mercedes-Benz in Factory 56, Bosch. There are a few that have, and Lockheed Martin. A few in aerospace, a few in automotive that are
Are they supplying their detailed information back to their suppliers?
Yeah, they are, because they're going after two big goals. One is to reduce the cost and the time of in-process manufacturing and produce for the customer of one. Produce for the customer of one means: be as automated and low cost as possible, with the preferences and variations of the end consumer in mind, because that's who we're really manufacturing for, the customer of one. Everybody wants something different, and nobody's prepared to take mass manufacturing's "here's what you get, it's blue, live with it." Things don't happen that way anymore. So they want that, but more importantly, they want to start adding as many value-added services and capabilities as they possibly can, because manufacturing margins, as you and I and Rob know, are super slim. If you're making a quarter of a penny on something, you have to do mass volume, but you can't do mass volume anymore. It doesn't work as a business model, and
but what that also says is that the cost of this data sharing, data consolidation,
digital thread
system has got to itself be so cost effective as to not eat, once again, whatever margins you're going to get off this. So yeah, I hear you, I hear you. So what's your problem statement? Is your charter to do this for one manufacturer? For a chain of independently operated manufacturers? What's your charter for this?
The target is, call it a multinational with many operating companies, where each factory will be similar but not identical, because no two factories are ever the same.
Right, so there's an umbrella organization, an umbrella manufacturer,
Yeah, yeah, call it, I don't know, a GM or a Ford or whoever, you know. Got it. But then also to take that whole model and miniaturize it on a micro level for an SMB or mid-tier that has less arduous requirements. The same basic digital thread needs to be created, and the same method of reversing that digital thread works across different segments of manufacturing, whether it's industrial equipment, compute electronics, semiconductor, which is huge, automotive, which is huge, aerospace and defense.
Because when you've got multiple organizations to coordinate across, in my mind there are so many places right there where you're going to,
if you're not hubbing data,
your latency for stuff is going to be best effort. I mean, there's almost no way you could guarantee it without some very, very strict agreements amongst the players and their systems. So I guess my question there is, yeah, you might be able to use, you know, MQTT and protocols like that, but oh man, that's an invitation to a rat's nest. I'm not sure. Actually, I don't care about that one. I want to think
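For context on the MQTT mention: the protocol's routing model is built on hierarchical topics with two wildcards, `+` for one level and `#` for all remaining levels. A minimal sketch of that matching rule, with an invented factory-telemetry topic hierarchy purely for illustration:

```python
def topic_matches(pattern: str, topic: str) -> bool:
    """MQTT-style wildcard match: '+' matches exactly one topic level,
    '#' (as the last pattern level) matches all remaining levels."""
    p_levels = pattern.split("/")
    t_levels = topic.split("/")
    for i, p in enumerate(p_levels):
        if p == "#":
            return True  # multi-level wildcard swallows the rest of the topic
        if i >= len(t_levels):
            return False  # pattern is longer than the topic
        if p != "+" and p != t_levels[i]:
            return False  # literal level must match exactly
    return len(p_levels) == len(t_levels)

# Hypothetical topics: a subscriber interested in every line's temperature
# could subscribe to "plant1/+/temperature"; an OT historian might take
# "plant1/#" and receive everything from that plant.
```

This is only the topic-filtering rule, not a broker or client; in practice a library like Eclipse Paho handles the protocol itself, and the "rat's nest" the speaker worries about is the governance around who may subscribe to what, not the matching mechanics.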
about, okay. So first of all, on the OT side, there are companies that specialize in OT security, and there are networks out there with specific protocols attached that can deal with most of the latency. The business side of that is: yes, there's a need in the market, because the suppliers are flat, the manufacturers are flat, plus they're labor short. So they're looking for economies of scale across the value chain. It's more than just me,
it's us.
So they need to do it for a variety of reasons. So there's definitely
This is why I come back to the notion that these data stores, whether on the individual equipment level or on the organizational level, have to be in some sort of federation model. Yes. And whatever streaming you're doing, whatever emphasis you're going to place on time, has almost got to make those streams basically databases themselves, and very selective, dynamic, I should say. Yeah, dynamic for sure,
and it's also, you know, transform-on-the-fly in transit in one part of the model, archived in another part of the model. And my big question is: this is a federated architecture with multiple distribution points, but it also has to run from a cloud down to a microcontroller, because of the robots, because of the PLCs that may not have a better way of getting the communication in millisecond time. So you're going to have to go up and down very quickly. I would go microcontroller to edge, edge across, and then only store in the cloud what I didn't need: my extraneous information, my historian, whatever I don't need more than a few minutes at a time, because the whole process, start to finish, of the main manufacturer is going to be seconds, right? It takes 100 seconds to put any
Any time you've got a hop, yeah, you buy into that. But it adds to the requirements of your data-sharing federation, amongst either equipment or amongst organizations, yes, even within an umbrella organization, much less across suppliers. So
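The "keep only a few minutes at the edge, archive the rest upstream" idea discussed above can be sketched as a rolling time-window buffer: the edge node retains recent raw readings for local control, and what would travel up a hop is only a compact summary. The class name, field names, and the 120-second window are all hypothetical choices for the sketch, not anything stated in the conversation:

```python
from collections import deque
from typing import Optional


class EdgeBuffer:
    """Keep only the last `window_s` seconds of raw readings at the edge;
    everything older falls out, and summary() is what goes upstream."""

    def __init__(self, window_s: float = 120.0):
        self.window_s = window_s
        self.readings = deque()  # (timestamp, value) pairs, oldest first

    def add(self, value: float, ts: float):
        self.readings.append((ts, value))
        # Evict anything older than the retention window.
        while self.readings and ts - self.readings[0][0] > self.window_s:
            self.readings.popleft()

    def summary(self) -> dict:
        """Compact record suitable for upstream archival instead of raw data."""
        values = [v for _, v in self.readings]
        return {
            "count": len(values),
            "min": min(values) if values else None,
            "max": max(values) if values else None,
            "mean": sum(values) / len(values) if values else None,
        }
```

The design choice this illustrates: the latency-critical raw stream never leaves the edge, so the cloud hop only has to carry low-rate aggregates, which is what makes a federated, multi-hop architecture survivable when the end-to-end manufacturing process is measured in seconds.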
here's my question to you, Rich: how can I do this in parallel, using a parallel architecture instead, to save time?
Well, there are a couple of things. One is you don't want to be relying on the microcontroller-to-edge-to-cloud path; you just pointed out you can't use that. The arrangements between the data shares, the data-set sharing as close to the microcontroller as possible, have got to be, once again, policy based. And I don't just mean access policies; there are a lot of rules that end up as policies, and they're not necessarily symmetric, meaning some people, or some devices or systems, get access to a lot more. And it's got to be in a mesh or a fabric. It cannot be a hub-and-spoke or concentric circles. It's got to be a fabric of some sort; think of it almost like a switching fabric, or a data mesh. That would be the only way.
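A toy illustration of "policy based, and not necessarily symmetric": each participant in the fabric gets its own filtered view of the shared records, so an OEM might see full telemetry while a supplier sees only records about its own parts, with sensitive fields stripped. Every record, field, and policy here is invented for the sketch; a real fabric would enforce this at the transport or broker layer, not in application code:

```python
# Records flowing through the data fabric (all values hypothetical).
RECORDS = [
    {"part": "battery", "supplier": "acme", "temp_c": 61.2, "serial": "B-1001"},
    {"part": "chassis", "supplier": "zenith", "temp_c": 24.0, "serial": "C-2001"},
]

# Asymmetric policies: who may see which records, and which fields survive.
POLICIES = {
    # The OEM sees every record with every field.
    "oem": {
        "match": lambda r: True,
        "fields": {"part", "supplier", "temp_c", "serial"},
    },
    # A supplier sees only its own parts, and no serial numbers.
    "acme": {
        "match": lambda r: r["supplier"] == "acme",
        "fields": {"part", "temp_c"},
    },
}


def view_for(participant: str, records: list) -> list:
    """Apply a participant's policy: filter records, then strip fields."""
    policy = POLICIES[participant]
    return [
        {k: v for k, v in r.items() if k in policy["fields"]}
        for r in records
        if policy["match"](r)
    ]
```

Because every pairwise relationship carries its own policy, the topology is naturally a mesh rather than a hub: no single central party needs, or is allowed, to hold the union of everyone's data.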
Wow, what a fun conversation. It is amazing to listen to Joanne really dive deeply into the challenges and opportunities facing manufacturers. It's really an insight into a market that IT and I don't get to touch as much as I would like. I'm actually trained as an industrial engineer, so factories are near and dear to my heart. If this is of interest to you, and if you're still listening, then it probably is, please come join us. We want you to be part of the conversation at the2030.cloud. You can see our schedule, be part of our book group and discussions, have a good time, bring your insights and questions. We would love to hear from you. Thank you for listening to the Cloud 2030 podcast. It is sponsored by RackN, where we are really working to build a community of people who are using and thinking about infrastructure differently, because that's what RackN does. We write software that helps put operators back in control of distributed infrastructure, really thinking about how things should be run, and building software that makes that possible. If this is interesting to you, please try out the software. We would love to get your opinion and hear how you think this could transform infrastructure more broadly, or just keep enjoying the podcast, coming to the discussions, and laying out your thoughts on how you see the future unfolding. It's all part of building a better infrastructure operations community.