Wolf, get away from those sheep. Bollocks. You're listening to The Wolf and the Shepherd podcast, broadcasting from Fort Worth in the great state of Texas. Now get ready for this episode of The Wolf and the Shepherd.
Welcome to this episode of The Wolf and the Shepherd. Today we have with us Kordell France. Kordell, great you could join us today.
Thank you very much to you and your listeners for having me. I appreciate it.
Now Kordell, before we start, I want to address the elephant in the room. An elephant? Where? Well, I'm addressing it.
Oh, you didn't even tell me there was an elephant in the room situation? It's theoretical. Theoretical? Oh, no, I know it's not an actual elephant. So what is the elephant in the room?
Well, the elephant in the room is that whenever you talk to anybody who has a big footprint in AI, right, you know, somebody who's really had an impact with it, the most frequently asked question online is from nerds asking, how far away am I from getting a robot girlfriend? So I think you need to address that before we go anywhere. In other words, get it out of the way. Yeah. Okay. Yeah. They've tuned in purposely to hear that answer, and I didn't want to make them wait for like an hour before they got the real deal on how long it's gonna take.
So, one to three months, depending on shipping. And, you know, the chip shortage might impact that a bit, but I think within three months for sure.
Wait, what do you mean, they already exist? I wasn't on about shipping times. I was more on about when I could go to Costco and buy a robot girlfriend. But you're saying they already exist?
Sure. Yeah, we build them.
Okay. Oh, I mean, wait, we were about to just go ahead and say, well, that will do it for this episode. I've got some online shopping I need to do.
cash out the cryptocurrency.
Yeah, well, we're starting off with a false sense of confidence. Yeah.
See, I always have this kind of issue with AI being used for, perhaps, you know, a robot girlfriend or boyfriend, as it were. Because most of the things people complain about are things which are born from humanity when it comes to personality. And so surely, just having an AI replicate those same annoying habits and behaviors wouldn't really put you on much more of a front foot. Now, we know what a lot of people want the robot for, and it's not necessarily for the robot to speak, particularly in those situations. But when you're talking about robot companionship and everything, and you're looking at AI from the angle that you are, where you can actually influence it, is that ever actually a thought: you know what, we don't want this to be too human? Because the big problem in a lot of, you know, social circles is people being human.
Right? Yeah, it's interesting, because with AI and robotics, we largely build those to be non-emotional, right? Most of that's built to be completely logical, and we limit its range of capabilities and what it's able to do. If you bring emotion into it, the ability for something to have compassion back, that adds a new element for engineers and practitioners to build toward. We already see people being attached, or showing compassion, not necessarily attachment, but compassion, towards a Roomba or some sort of robotic, inanimate object. And the Roomba is not showing any affection back; it doesn't care what you think of it, right? It's gonna do its job, it's gonna go back and recharge. But being able to think in those terms, it's like: well, can we, first off, and then if we can, should we? If we build compassion and love into it, do we then build something else in? Does anger come next? Do jealousy and retaliation come into it next? It's a very interesting topic, and, you know, emotion might be the next level of AI.
Yeah. And I guess, you know, before we go any further, we need to know a little bit about your background. You're the founder and CEO of Secret Technologies. So can you at least give everybody the background, so they know that we didn't just, you know, get a random dude on here and decide to talk about AI? We probably should have led with that, and we kind of messed that up a bit. So give us an introduction of you, your background and everything, so everybody knows you're not just making this stuff up.
Sure. So I originally started on a farm; I grew up on a farm. My father is a brilliant man, he's a farmer. And he first exposed me to robotics and autonomous systems with autonomous tractors. He brought home some self-steering software one day that allowed these giant tractors to steer by themselves. And, you know, as a toddler, seeing this thing act of its own accord without me touching the steering wheel, it was something straight out of a sci-fi movie at that age. And so that kind of catapulted my interest into AI and robotics, and I started pursuing that path through school and everything. Bringing us up to today: a few years ago, I decided to leave my position at a defense company, working on missiles and autonomous systems in defense, to start Secret Technologies, which is an artificial intelligence company focused on three different domains: mobile AI, explainable AI and ethical AI. And yeah, that kind of brings us up to date. Secret Technologies does work in a variety of different fields, but we've focused primarily on security and the medical fields.
Now, when you first started getting involved in, for want of a better term, technology, because as a kid you don't really dissect things into different categories, was it something you instantly fell in love with? Or was it just a convenience thing? Because obviously you're a lot younger than we are, so we knew a time when, you know, there were no cell phones, no home computers. And I guess we just saw technology as something that would make our lives easier and more interesting. But we never came from a position where we were born and it was suddenly there, where everybody is exposed to it, you know, almost at once. But did you fall in love with it, especially wanting to know how things worked, how you could change things? And rather than just enjoying them, you perhaps wanted to take something further?
Yes, the short answer is yes, I did. You know, I've always been fascinated by things that act of their own accord, things that are automated, things that just act in an autonomous fashion. And I've always been fascinated with finding order in chaos. So if you have a set of data, something that seems like a puzzle, how do you build something that can solve it and find the order within something that looks like random noise? And math is a great vehicle for that, being able to say, I can put an equation to all this data and find order within chaos. So I really excelled in math, and really enjoyed it, and I still do, so much so that it was what I originally majored in in college. And AI and robotics is really built on that foundation; it's really finding order within chaos and trying to navigate to a certain solution. And as I got more into it, building these autonomous systems, you start to learn more about how your own brain works, how the human mind works when performing certain tasks, which is quite fascinating. You have to break steps down. I was thinking about this the other day, because we're working on a robotic arm right now: all the steps that need to be taken just to rotate a screwdriver, which you don't think about. You just pick it up, you rotate it and go. But to program that, or to teach a robot how to do that of its own accord, there are so many different levels of cognition you have to think about. It's interesting; it kind of allows you to realize how magnificent the human brain really is, that we can accomplish so much without even having to think about it. Yeah.
So you mentioned earlier the three areas of AI. Can you go over those in a little more detail? Kind of bust those out for us, those three different areas, and give an explanation of what each delves into?
Absolutely. So those three areas are mobile, explainable and ethical AI. Our emphasis in mobile artificial intelligence is being able to make artificial intelligence that isn't tied to the cloud. AI models, these algorithms, are usually pretty big. A lot of times they have to live on the cloud, and your phone or autonomous vehicle or whatever has to send data to the cloud in order to make a decision or get a sophisticated result back. We compress everything small enough to fit on the device, so that everything can operate in real time. Whether that's your phone, a robotic arm, or whatever it is, the model is small enough to operate on the device itself, which also protects privacy, right? Because there's less exposure of your data to bad actors when you don't have to transact with the cloud. So that's the mobile tier. The explainable tier is probably the biggest, the most important of the three. That's being able to make artificial intelligence that can explain itself, just as a human can. So for example, if I have an AI algorithm that screens for cancer in chest X-rays or radiology scans, it's hard to trust something that screens through all these images and says this one indicates evidence of cancer, this one doesn't, this one does, this one doesn't, if it can't explain why. A physician can explain why, because they can point to certain things and say, well, I know that these areas should be this color, or they should have this shape, etcetera. AI does something similar, but it's kind of a black box in nature, in that it's really hard to dive into why it made a certain decision if things go wrong.
And that's something we need to work on as practitioners and engineers of AI, because that's what allows people to develop trust in these algorithms, and develop trust in autonomous systems and robotics. And the third tier is ethical AI. You've probably seen or been exposed to issues with facial recognition in the news. There are issues right now with facial recognition being inaccurate, or having a very biased approach with certain ethnicities, right? Let me dive into that for a second. If you have a facial recognition system that's screening for potential bad actors or potential suspects, it's basically convicting one ethnicity over the other slightly more, and it's unfair. And at that point, you're leveraging AI that really has a large effect on the rest of people's lives if it's wrong. So, you know, that's an ethical issue that can be rectified by better review of the data that the AI is trained on, and better review of the algorithm. We take a particular interest in making sure that we heavily review the data that goes into our algorithms, by having multiple types of people come in and look at the data, but also performing significant stress testing on the algorithm before it's published into the field. So I'll pause for a moment, if you want to zoom in on anything I just mentioned, but those are the three tiers: mobile, ethical and explainable AI.
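[Editor's note: the compression behind the mobile tier can be illustrated with a toy sketch. This is a generic illustration of 8-bit quantization, one common model-shrinking technique, not a description of Secret Technologies' actual method. Each 32-bit float weight is stored as one signed byte plus a shared scale factor, roughly a 4x size reduction at the cost of a little precision.]

```python
# Toy 8-bit quantization: the core trick behind shrinking AI models
# so they can run on a phone instead of the cloud.

def quantize(weights):
    """Map float weights to small ints plus a shared scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    q = [round(w / scale) for w in weights]  # each value fits in one signed byte
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the quantized values."""
    return [v * scale for v in q]

weights = [0.82, -1.54, 0.003, 2.41, -0.67]
q, scale = quantize(weights)
approx = dequantize(q, scale)

# Storage drops from 4 bytes per weight (float32) to 1 byte, while each
# recovered weight stays within one quantization step of the original.
assert all(-127 <= v <= 127 for v in q)
assert all(abs(w - a) <= scale for w, a in zip(weights, approx))
```

Real on-device toolchains apply the same idea to millions of weights at once, trading a small accuracy loss for a model that fits in the device's memory.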
Yeah, just referring back to when you were talking about the amount of work that goes into programming a robot arm to do fine motor skills, like unscrewing a screw. With a human, obviously, we understand right off the bat about proximity, how to hold the screwdriver, the amount of force applied, even which direction to turn it. But with self-learning AI systems, are we at a point now where we can just say to a robot which has some form of mobility, go over there and unscrew that screw? And it's already learned that, okay, I've got to take into account proximity, I've got to put this much force into it. You're not having to write very precise code for a very specific action; this robot arm can now unscrew any screw if you just tell it which screw to unscrew, because it understands the concept: I've got to be close, I've got to put this much pressure on, I've got to turn it this many times.
The short answer is yes, we are there, but it has to be done through learning, right? That's the most effective way to do it. Because, as you mentioned, going through and writing specific lines of code means you really have to break it down the way a human does. When we're attempting to turn a screwdriver, we're not saying, okay, raise arm 10 degrees, open palm, grab screwdriver, close with this much force, etcetera. We're not thinking in those terms. We've learned how to do it; it's just familiar. You have to teach an autonomous agent, a robot, the same way. You basically show it several examples of how it should be done, incorporate that with a level of perception, so different types of sensors that allow it to gather the information it needs, such as cameras or depth sensors to see how far away it is, and allow it to learn how to do it on its own through these machine learning algorithms. That's the most effective way. Because if you do hard lines of programming, you just run into conditions those lines of programming won't satisfy; it would be a nightmare. And then you try to run it on a different robotic arm, and it's not going to satisfy the same conditions either. So learning, yeah, it has to be employed, and these high-tech algorithms are really the only way to do it.

Yeah. And then, kind of leading into that, you talked about cameras and depth sensors. I think about radar, LiDAR, sonar, things like that. It seems like everything now is shifting towards the camera, using a camera versus all of those other sensors. You know, way back in the day... I have a robot vacuum cleaner, and you were talking about how it has no emotion, but I yell at it whenever it goes in the wrong place, and it doesn't listen to me, obviously. But everything's now switching to cameras to get that data, which, ironically, is kind of like our eye, right? Our eyes are cameras; we see something, we translate that in our brain, and we make that decision. Is that the absolute best way to go? We've had cameras a lot longer than we've had radar and everything else, and it's almost like we're taking an old technology versus the newer technology, and the older one was better.
Yes, it's an interesting question. As a case in point, Tesla has completely done away with radar in their cars. They're now shipping cars with only cameras, because they think they can achieve fully autonomous driving just through cameras. There's no other car company doing that, that I know of, that's that confident in it. Honestly, I think they're right; I think we can achieve human-level abilities with just cameras. But, you know, especially in driving, you don't want to just match the human. I think you want to go a level beyond, so we can say that this vehicle is safer than a human, it can drive better than a human. It can drive through fog, it can drive through the dark with superior ability. That's where I think radar, LiDAR, those different types of things, come in to achieve that superhuman performance. Because you're right, cameras effectively function in a very similar way to our eyes, and if we're only trying to replicate our eyes, we'll really not be any better than humans, or that much better than humans, in my opinion, strictly through cameras. By taking advantage of these newer technologies and incorporating them, we really can achieve a superhuman level of performance. I mean, I think about this all the time, because I have to slow down in the fog; we all have to slow down when we're driving through fog. Even at night it's a little hard to see. I've got glasses, right? My cameras are already handicapped to begin with. But how cool would it be if, when it's raining, it's snowing, whatever, your car does not slow down or make any changes at all, it can just continue going? That's the type of thing where I think it's important to try to really take advantage of newer technologies and integrate them with human-level abilities.
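[Editor's note: the point about radar and LiDAR complementing cameras can be sketched with a toy sensor-fusion example. The numbers are invented; the technique shown, inverse-variance weighting, is a standard way to combine two noisy measurements so that the fused estimate leans on whichever sensor is currently more trustworthy, e.g. the radar in fog.]

```python
# Toy sensor fusion: combine a camera and a radar distance estimate by
# weighting each sensor by how much we trust it (inverse variance).
# In fog the camera's uncertainty balloons, so the fused estimate
# automatically leans on the radar, which fog barely affects.

def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted average of two measurements."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    return (est_a * w_a + est_b * w_b) / (w_a + w_b)

# Clear day: both sensors are sharp, so fusion splits the difference.
clear = fuse(49.0, 1.0, 51.0, 1.0)

# Heavy fog: the camera is nearly blind (huge variance), radar unchanged.
# The fused estimate stays close to the radar reading of 50.5 m.
foggy = fuse(70.0, 100.0, 50.5, 1.0)
```

This is the simplest member of a family that includes Kalman filtering, which fuses sensor streams over time rather than one snapshot.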
Yeah, I think, you know, with the example of infrared: the most basic use of infrared was perhaps security. You could tell if somebody had entered a building, because they'd break that infrared link, right? But shifting to where we are now with cameras, you can see who it is on your phone, whereas before, all the infrared told you was that somebody had entered the building. But eventually we're going to get to the point, and we may already be there, where the AI can see who it is going into the building and recognize it's Aunt Mary, and it's okay, I don't need to alert somebody. Or it can see that face and actually match it with something it's seen in the news about some burglar who has been doing a lot of home invasions in the area. So I mean, the camera is very logical in that respect, because, you know, I think sight is the only sense we trust regardless of what all our other senses tell us. We say, I'll believe it when I see it, not, I'll believe it when I hear it. And so we take in most of our information, most of the data for our decision-making, based upon what we see. So I guess it's no surprise that as AI advances, what the AI sees and interprets, the quality of that and the amount of data coming in, has to be as much as possible, because that's how you make the best decisions: by collecting the majority of the data. But are we ever going to be at a point where, in a split-second decision, regardless of the level of AI, regardless of how much backup information it's got to refer to, it's ever going to replicate that gut feeling that a human has? Because that's something you really can't program into AI, even if it does become sentient and self-aware.
There's that humanistic, I don't know, gut-feeling thing, which isn't really something you can prove scientifically or empirically in any way. But how would you get AI to replicate something like that? Or is that just truly unique?
In my opinion, I think it is unique to humans. I don't think we'll be able to replicate that, and if we can, I don't think it'll be for at least another hundred years based off of current technology. That's kind of a blend of emotions, right? Being able to have that gut feeling and that instinct to know what to do, even contrary to all the data you're being fed. Even AI, fed all this data through all these algorithms, is still making a decision based off of logic, right? It's a lot of logic, and it's learned logic, but it's still trying to make the best decision. We can be fed information as humans and still have a gut feeling to go against it, and do a different thing than what the data showed us, right? I don't know what even goes into that; maybe nobody does. As you mentioned, it's very hard to prove what causes us to go against what we're observing and what we're seeing. I don't think that's something we'll be able to replicate with AI for a while.
Well, having said that, I want to tell you about a theory I actually read recently on that: that gut feeling is really based on levels of chemotropism, right, how we react to our environment and how our senses react to the environment. And so if you had a robot with skin which could tell temperature, could read the emotions on somebody's face, could pick up, you know, flashes of anger coming across in body language rather than vocally, that AI could actually make pretty much as good a guess at what a gut feeling really is, or what causes a gut feeling, and be able to convincingly replicate it. That was an argument I read literally two or three days ago on a long car ride. But once we conquer that, so that it feels human, rather than just convincing us with its intellect and its ability to hold a conversation, is there always going to be that barrier that, because it's not really going to make a mistake, we're never going to truly be convinced? Because, again, it's a very humanistic thing that we make mistakes. And even though we might learn cognitively from those mistakes, we still repeat them. And when people ask, why did you do it, I'll be like, I don't know. AI is never going to be in that situation where it does something ridiculous, and we say, why did you do that, and it just shrugs its metaphorical shoulders.
That is... yeah, there's a lot I want to say. I want to try to break it down, because you put it brilliantly. That's part of the explainability we're trying to build in, right, being able to explain these algorithms and how they behaved. Now, what you're saying is at a much more accelerated level than what we're working on right now. But, you know, I spoke about how bias is actually something we're trying to remove; in an algorithm, it's a bad thing. But biases are what make all of us human, and really what make our personalities, effectively. I mean, our personalities are effectively an equation that's a function of all these different hormones and parameters, our experiences, how our brains are wired, all of our knowledge up to this point, which is effectively a bias. And if you start incorporating that into a robot, it might be able to use its own bias to make that gut-feeling call, which is effectively what we all do. I mean, to me, if I have an instinct or gut feeling to do something, it's kind of a consequence of my experiences and my biases that allow me to go against the data I'm presented and say, I'm going to go do this instead. And so if you incorporate that same principle into a robot, you might be able to replicate it in that regard. Now, body language, and being able to interpret everything about how a human is communicating, is tricky, right? Because even though I'm saying something, body language... I was reading a paper a few weeks ago that claimed body language is something like 60 percent of all our communication, which is quite astounding, right?
I mean, it's like, throw voice recognition out the window and just read body language, to some degree. But being able to take voice, and look at people's eyes, and look at how they're behaving, their actions before and after they say something, and make sense of all of that, that's a very unique human ability, and something I think is going to be very hard for us to replicate in robotics. Emotion recognition itself is kind of a hot topic right now in AI, and something people are going back and forth on. Is it something that's needed? Yes, because it actually creates intelligence; but no, because it could be abused, right? If the robot or artificial intelligence can read human emotion, it could take advantage of us and exploit those emotional vulnerabilities within each of us. So we're at kind of a critical point. Do we continue down that development path, and make sure we do so in a very smart manner with a positive outcome? Or is it going to be taken down an ulterior path with bad consequences? It's kind of an interesting inflection point.
So, thinking about it, one of the Wolf's and my interests is robots. I mean, we love robots, and we've talked about robots on the podcast before. One of the stories, a news story, came to my mind: it was about a security robot in a mall. The robot was given the ability to make its own decisions, and it decided to drive itself into a fountain and basically commit suicide. And then there was another article we read about a security robot in a park, somewhere in California. A woman came up to the robot and needed help, and the robot told her to go away, and then turned around and started singing a song. And so, you know, we love these kinds of goofy robot stories, right? But going back to what you were saying, of what are we going to do, should we move forward, should we do all this, right? I keep thinking that our robots are like toddlers right now. If we look at it in human terms, you said, oh, we're a hundred years from this, whatever. Are we in the toddler phase, where they're not babies anymore, where you've got to carry them around, right? Like you've got to carry a baby. Now they're able to move, now they're starting to make decisions. But even your toddler knows, well, I'm not supposed to go to the bathroom in my pants, my parents have told me this over and over again, but I still do it every once in a while. So would you say we're in the toddler phase, and with this growing and evolution you're talking about, we're trying to get them to the teen phase?
Yes, with a couple of exceptions. So when I was a toddler, when each of us were toddlers, we could see a picture of a pickup, a train, an airplane, a car, maybe two or three times, and we would be able to just pick up that a car is a car, a train is a train, etcetera. We didn't have to be fed thousands of images like we do with AI models now. Now, there's research that's compressing that, so it can learn more like a human. But in order for an AI model, something that recognizes things through a camera, to actually detect them, something that can detect vehicles on a highway, you've got to feed it thousands of images: images of vehicles at different angles, different types of vehicles. I mean, I can look at a Mazda and know that's a car. I can look at a Chevy Impala and know that's a car. I can look at a Ferrari and know that's a car, even if you'd only ever shown me a picture of a Honda Accord. We all can, right? So that's something we're very good at as humans, learning from very little data, and we are from the get-go as toddlers. AI is not that good right now at learning from very little data; it needs to be fed massive amounts of data. And then a lot of models are locked, a lot of artificial intelligence algorithms are locked. So if we train an AI algorithm to detect airplanes and cars and trains, and then we deploy that algorithm, it's never going to detect anything other than airplanes and cars and trains. As toddlers, we can go forward and say, okay, I see something in the water that's floating. I don't think that's a car, it's not an airplane, that's not a train. My parents haven't told me what that is, but I know it's not one of those three; it's something different. AI models now... I was just testing one the other day that classified a boat as an airplane.
And, you know, it doesn't have the knowledge to know that this is something outside of what it knows; it can't make those decisions. So it's still very far behind human intelligence in that regard, and still naive. But if you compare what it is now to what it was 15 or 20 years ago, it's massively accelerated. And so you can imagine what it will be in another 20 years; what I'm saying might be a moot conversation by then. So we are at the toddler phase, but with a few slight differences.
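[Editor's note: the "locked" model Kordell describes, one that must call a boat an airplane, a car or a train, can be sketched in a few lines. The scores below are made up for illustration; the point is that a softmax over a fixed label set hands all its probability to known classes, so the model cannot say "none of the above" unless we bolt on something like a confidence threshold.]

```python
import math

# A trained classifier is "locked" to its label set: softmax forces all
# probability onto known classes, so a boat must come out as one of these.
CLASSES = ["airplane", "car", "train"]

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(scores, threshold=0.6):
    """Pick the best-scoring known class; flag low confidence as 'unknown'.
    The threshold is a crude, common mitigation, not a real fix."""
    probs = softmax(scores)
    best = max(range(len(CLASSES)), key=lambda i: probs[i])
    if probs[best] < threshold:
        return "unknown", probs[best]
    return CLASSES[best], probs[best]

# Made-up raw scores for a photo of a boat: weak evidence for everything,
# but "airplane" happens to score highest, so the locked model picks it.
boat_scores = [1.1, 0.9, 0.8]
label, conf = classify(boat_scores, threshold=0.0)  # no threshold: forced pick
guarded, _ = classify(boat_scores, threshold=0.6)   # thresholded: abstains
```

Open-set recognition, the research area behind "I know it's not one of those three," is exactly about doing better than this kind of fixed-vocabulary forced choice.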
Right. So for the normal person who doesn't really have much to do with technology, and doesn't really research AI, explain why you couldn't code the equivalent of playing 20 questions for the AI to be able to tell the difference between a boat and a plane.
So part of the reason is the way in which an airplane is presented. You can see an airplane head-on, you can see it from the side, you can see it in the sky. I can look at an F-14 fighter jet, I can see a Boeing 737, all these different varieties. And so you can hard-code an algorithm to say, look for certain things: look for wings, look for a fuselage, look for engines underneath the wings, different types of things. But to really get to an algorithm that can identify an airplane in all different scenarios, it's got to be fed thousands of different airplanes in different orientations, different angles, different colors, different styles, different models, manufacturers, etcetera. In order to accommodate all those scenarios, it's very hard to achieve that in lines of code, just saying, if it's not this, then it's this; if it's not this, then it's this. I mean, you could end up with hundreds of thousands, millions of lines of code in something that just recognizes whether or not something is an airplane. Versus teaching that algorithm to learn, which allows the model to learn when an airplane is truly there or not, and to identify certain features that make up an airplane.
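[Editor's note: the brittleness of the hard-coded approach can be shown with a tiny hypothetical rule-based detector. The feature names are invented for illustration; the point is that every viewpoint or variation the rules don't anticipate needs yet another special case, which is the path to the "millions of lines of code" Kordell mentions.]

```python
# A hand-coded detector: explicit rules over hand-picked features.
# (The feature names are invented for illustration.)

def is_airplane_rules(features):
    """Hard-coded rule: it's an airplane only if we literally see wings
    AND a fuselage AND engines. Every feature an unusual viewpoint hides
    demands yet another special case in the rules."""
    return ("wings" in features
            and "fuselage" in features
            and "engines" in features)

side_view = {"wings", "fuselage", "engines"}
head_on_view = {"fuselage", "engines"}  # wings barely visible head-on

# The rule accepts the side view but rejects the very same airplane
# seen head-on. Covering every orientation, model and lighting condition
# this way is why hard-coded vision balloons into special cases, while a
# learned model infers its own features from thousands of labeled examples.
```

The 20-questions idea from the question above has the same weakness: each question is itself a hand-written rule that must already work in every scenario.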
Right. So I've gotten a little bit confused then. Because when you originally mentioned it couldn't tell the difference between a boat and an airplane, it was doing this by visual recognition, judging by what it saw, trying to ascertain whether it had wings, whether it was in the water, because it has difficulty based purely upon what is visualized, and then trying to, like you said, sort through all the different sorts of planes it's ever seen from every angle, and the same thing with boats, to distinguish them. Would it not be more simple just to revert back to that 20 questions, or expand it to 30 questions, in a case where we've perhaps, through those prior questions, narrowed down that it's a passenger-carrying, moving form of transportation? Surely at some point, if one of those questions is, does it fly or does it float, then that, regardless of what it looks like, tells you whether it's a boat or a plane. Why is it harder to go to that simple kind of sorting method than to something where a computer's getting confused between a boat and an airplane purely on aesthetics?
So part of the reason it gets confused, or would get confused, is the background and what it sees in the background. A boat usually appears in a massive body of blue water, and an airplane can appear over one too, right? But there are very big differences between a boat and an airplane. So at that point you have to program the computer to say, okay, what is defined as a body of water, what is defined as something being in contact with the ground rather than floating in the air, what is defined as clouds. You basically have to define all these different things: look for these explicit things, look for water, look for a bay, look for a dock, look for clouds, the sun, etc. So you have to define the background in hundreds, thousands, effectively millions of different scenarios in order to say, okay, I know this is not a boat, I know this is not an airplane. Part of it is identifying the object itself, and part of it is identifying what's in the background. The way these algorithms work is they're really not identifying a plane per se, immediately. They're trained to break the image down and look for things like we do as humans. If I'm looking at an airplane, I know that there are wings, I know that there's a fuselage, I know that there should be windows around the cockpit area, I know that there's usually a tail fin and some ailerons. These are features of an airplane. And I know that an airplane can be on the ground on the runway, and it can also be in the air, right? So you're looking for all these different features, which then go to different levels of abstraction and fuse together to say, this is what an airplane is: I get it, all these things make up an airplane.
So the AI model is really making sense of all these shapes, and then compiling them all together to say, okay, this is an airplane. Which is kind of what we do as humans in order to distinguish between an airplane and a boat, except we do it so much faster, with only a few images of representation.
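The contrast being drawn here, hand-coded rules versus a model that learns which features matter from labeled examples, can be sketched with a toy logistic regression. The feature names and training data below are invented for illustration; a real vision system learns such features from pixels rather than receiving them pre-extracted:

```python
import numpy as np

# Binary features per image: [has_wings, has_fuselage, has_hull,
# background_is_water]. Note the deliberate counter-examples: an
# airplane photographed over water, and a boat in dry dock, so the
# model cannot lean on the background alone.
X = np.array([
    [1, 1, 0, 0],  # airplane in the air
    [1, 1, 0, 1],  # airplane photographed over water
    [0, 0, 1, 1],  # boat at sea
    [0, 0, 1, 0],  # boat in dry dock
], dtype=float)
y = np.array([1.0, 1.0, 0.0, 0.0])  # 1 = airplane, 0 = boat

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=4)
b = 0.0

# Plain logistic regression trained by gradient descent: no if/else
# rules, just weights adjusted to fit the labeled examples.
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(airplane)
    grad = p - y
    w -= 0.5 * (X.T @ grad) / len(y)
    b -= 0.5 * grad.mean()

# The learned weights should favor wings/fuselage and largely ignore
# the background, because the data contains counter-examples for
# the "water means boat" shortcut.
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print(pred.tolist())  # [1, 1, 0, 0]
```

The point of the sketch: nobody wrote "if wings then airplane" anywhere; the preference for wing-like features over background features falls out of the training data, which is the trade the guest describes, data volume in exchange for hand-written rules.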
So, I got to thinking about my Alexa at home. A lot of times I can ask her a question and she knows the answer, you know, my typical, Alexa, what's the weather today, and she'll tell me the weather and all that. She's pre-programmed for a lot of that stuff, because that's the stuff people want to know, and Amazon knows, hey, people are going to ask about the weather, and so on and so forth. But a lot of times I'll ask her something, and then she's like, oh, well, here's what I found on the internet. So what is preventing the AI from doing what I tell my kids: I don't know, look it up on YouTube, look it up on the internet. What's preventing the AI from saying, I see something, I'm not quite sure what this is, let me look it up on the internet, connect to the internet, make all the decisions, and then spit out the answer?
So the short answer is nothing is preventing it from doing that. The long answer is it's very hard to bound the algorithm, to control it from going down a rabbit hole. The way AI is trained right now, it's effectively like a tree: you have to define the number of branches. If I have a four-branch tree, I'm going to train the AI model to identify one of four different things, and it can only identify one of those four things, no matter what. So let's say Alexa maybe started out with four questions; it could answer one of those four questions in different manners, and that's it. Now, you can program that to learn to grow sub-branches and different derivatives of those questions, and try to make sure it's still able to be accurate. But it's very hard to control that, because it could deliver inaccurate answers. If you don't control the AI algorithm, it could go on and learn things that maybe don't exist, learn false information and draw associations, saying, I know that every day it rains, the stock market's going to go up. Well, that's not a real association; you just drew that from data that maybe was coincidental. So we kind of have to lock those algorithms down in order to make sure they learn effectively; otherwise they don't learn at all. Now, there's a lot of research and effort to try to make it so AI can learn on its own, and that's the future, right: having something that can learn all these different things on its own, outside of what it was trained to do. But it is a hard problem, for several different reasons.
I know that was kind of a long answer to your question, but it's still kind of unknown at this time.
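The "fixed number of branches" limitation described above can be illustrated with a toy intent classifier: a model built for four intents must answer with one of the four, even for a question it has no business answering. The intents and keywords here are made up for the sketch:

```python
# A closed-set classifier: whatever it hears, the answer must be one
# of these four intents. Toy keyword-overlap scorer, purely illustrative.
INTENTS = {
    "weather": {"weather", "rain", "sunny", "forecast"},
    "time":    {"time", "clock", "hour"},
    "music":   {"play", "song", "music"},
    "timer":   {"timer", "minutes", "countdown"},
}

def classify_utterance(text):
    words = set(text.lower().split())
    # Score each intent by keyword overlap. Even when every score is
    # zero, max() still returns one of the four labels: a forced guess.
    return max(INTENTS, key=lambda i: len(INTENTS[i] & words))

print(classify_utterance("what is the weather today"))      # weather
print(classify_utterance("play a song"))                    # music
print(classify_utterance("summarize the history of rome"))  # forced guess
```

The last call returns a label anyway, which is the failure mode being described: the tree has no branch for "something else," so an out-of-scope input gets shoehorned into one of the trained categories unless the designer explicitly adds a fallback.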
Yeah. And I know you want the next question, but what if the stock market does go up, like, 80% of the time that it rains, and we've never thought about that, but you allow the AI to go down that rabbit hole and do all that research that nobody else wanted to do? I mean, isn't that one of the things we're looking for? Computers are supposed to help us, AI is supposed to help us, supposed to make our life easier. So I don't see that as a bad thing.
That's true, that's a great point. Because if we allow computers to do that, if we allow an AI algorithm to actually draw those conclusions, do we believe them? As a human, if you told me that the stock market's going to go up every day it rains, my bias is automatically going to kick in, and I'm going to say, no, it doesn't, that's not true. But if it can explain why, and it can show me the evidence for why, then we're more prone to believe it. That explainability is a key factor, because a lot of the AI algorithms that are used to learn things like this, maybe the stock market does go up on the days that it rains, they just make sense of all this data. They're fed all this data, and there's all this math, all these equations inside these algorithms, and it's hard to go back and say, well, why do you think that? Maybe it's true, but why do you think that? And maybe it can't explain its decisions. So that's where that trust kind of comes in, and explainability: these algorithms being able to explain why they think there's a relationship that we don't believe exists. That's crucial. But that is the beauty of AI, you're entirely right. One aspect of the beauty of it is being able to have it make sense of data that we can't fathom, we can't understand.
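The rain-and-stock-market example is easy to reproduce: on a small, coincidental sample, an unconstrained learner can find a numerically real correlation that means nothing. A sketch with invented data:

```python
import numpy as np

# Ten made-up trading days: did it rain, and did the market go up?
# The data is fabricated so the two mostly move together by chance.
rain      = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
market_up = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 1])

# Pearson correlation coefficient: 1.0 means perfect co-movement,
# 0.0 means no linear relationship.
r = np.corrcoef(rain, market_up)[0, 1]
print(round(r, 2))  # 0.58
```

A correlation of 0.58 on ten points is exactly the kind of "association" an unsupervised learner might confidently report, which is why the explanation step matters: a human shown the sample size and the raw days can challenge the conclusion, whereas a bare prediction cannot be audited.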
Yeah, there is an issue where, if we don't immediately think there is a relationship, you know, correlation versus causation, we tend to throw it out. There used to be a thought that the majority of people who committed suicide around the end of the year did so over psychosocial issues: people would get more depressed because there'd be more of a, I guess, civilized calling together, and people felt more lonely if they didn't have family, or had to meet with family they hated. And for decades people thought, okay, that's why there are more suicides at that time of the year. But with more and more data that came in, they actually realized that it was due to the lack of daylight, less daylight, less sunlight, and they described something called seasonal affective disorder, where the brain reacts very differently and doesn't produce as many of the happy chemicals when it isn't seeing as much daylight, as much sun. So the weather, I guess the climate, was directly responsible for that increase in suicides, but for decades it was believed there were psychosocial reasons why that tended to be the time of the year when more people committed suicide. But an AI would actually be able to make that correlation from enough data, and break it down geographically. Okay, so you live in the north of England, where during the wintertime the daylight's far less than in perhaps the south of the country, and the AI would pick up on it, because it's neutral in this: okay, there must be another factor, it's not just the time of the year. And when you get enough people, and you're able to put them in groups, like, okay, this person doesn't have many family members, this person has a lot, but those numbers still don't match, you have to think, well, what's the other influencing factor?
And again, humans aren't necessarily that good at finding these other things which have been missing, whereas AI can discover them, not by intentionally looking, but just as a residual effect of crunching that data. Now, the reason I said that was because you've recently moved a lot of your AI focus into the medical field, especially for software to make better and faster decisions, where on the spur of the moment maybe somebody doesn't have the time to collect all that data and make a decision, but also to speed up the process for certain things. Could you tell us a little bit about that, and why people should trust that AI is making some of these decisions
and that medical and automation professionals should not be afraid of it, right? Yeah.
Yeah. So this is kind of where our own human biases kick in, right? If I have an AI algorithm and, in the case of that medical condition you're talking about, I say, find the cause of this medical condition, but I don't give it access to data for sunlight in these different regions, it would never draw those conclusions. Because we as humans didn't originally suspect that to be a contributor, we didn't feed it that data. And that's part of the reason why, back to your question prior to this, Alexa can't just learn on its own to answer all these questions: part of it is the data we're feeding it, because we have our own biases in the data we're feeding it. Now, if we let an AI algorithm have access to the entire internet, and all of YouTube, and just said, you know what, I don't have any specific questions I need answered, just learn, draw associations and correlations between different things, I would love to see the things it would come back with, right? Because it's unbounded at that point. It has access to literally all the information in the world, and effectively no bias in the data it has access to. So that's kind of an interesting point. Now, to bring it back to medical data, I recently had experience with this with our team. We were working on a product that was trying to discern whether or not patients had Coronavirus, given a few different factors. And our team could not for the life of us figure out a relationship among the data we were feeding the algorithm that was contributing to these positive patients versus non-positive patients. So we fed it to an AI algorithm and just said, this is all the data we have; see if you can draw some conclusions from it.
And it did. And I couldn't believe it. It was a really humbling point for me, from an engineering perspective and from a leadership perspective, because it totally schooled us. It found some very reliable predictors and some very reliable relationships among the data we were feeding it that we had completely missed. We basically gave it access to all the data we had. So in that sense it's biased, because we only had the data we had accumulated, but we gave it everything, so it was unbiased in that regard, and it was able to draw some really fantastic conclusions that solved that problem completely. And that's one aspect where I don't think we should be afraid of AI, because it did some good. Now, the situation where we should maybe be a little bit afraid of AI, or not trust it, is when it's the last link in the chain. In my opinion, artificial intelligence right now is not good enough to diagnose somebody with anything; the doctor should always be the last link in the chain for diagnosis. So for example, we worked on a product that analyzed a bunch of chest X-rays and decided whether or not the chest X-rays showed evidence of pneumonia, evidence of COVID, evidence of emphysema, etc. But we were not the last link in the chain; we were just screening, to help radiologists and to basically increase doctors' throughput. The physician was ultimately the last one that reviewed these images to determine what the ultimate condition was. Because you can imagine, if an AI algorithm makes a bad decision, that has severe consequences. Someone could get mistreated with the wrong medical equipment or the wrong medicine, and it could ruin their lives.
So we're not at a point where AI is there. It should only be used, perhaps, as a screening tool to augment physicians' capabilities, but I don't think it should be used to replace them. I won't say ever, but at this time I don't believe it should be used to replace a physician's capability. That's where I would not trust it at this
point. Yeah. Alright, one of my favorite shows of all time is ER. My wife and I religiously watch it almost every night; I think we're on like our hundredth round through it. And I always kind of think about AI in the medical field. Like, a robot's doing the surgery, right, and maybe it's calculating, okay, the percentage chance of this person surviving has now gone down below 10%, so it's not worth the time of having them on the table, because expenses in the hospital are bad, and this thing's been programmed that way. So now all of a sudden it's going to make this decision about somebody's life. Conversely, then you have people, you know, like me, for instance, where I've always told my wife, don't put me on machines, don't do anything like that. But the robot can say, well, we need to put this person on machines, we'll be able to sustain their life, and my goal is to keep people alive. So is there a way to train that, or are we always going to have to have that human touch?
That's a double-edged sword. Because a robot can't... it goes back to that gut instinct, right? A robot can't possibly have that. Right now it's not good enough to interpret all this data in real time, all these emotions, all these different inputs of data. Whereas humans, one of the things we have over computers is that we can process massive amounts of data at the same time, from sound and sight and smell and everything, and we're able to draw conclusions in different areas. That's really hard for a robot to replicate right now. In the future that might be different, but at least right now it is. Now, conversely, humans have biases, and if I'm having a bad day, my work might be influenced, whereas a robot isn't going to have that; it's going to operate the same way every time and maybe maintain some consistency. That's one area where a robot in surgery or in medicine might be advantageous. But I just don't think right now it should be used as the last link in the chain to make really tough calls, because it's not good enough right now to determine who lives and who dies in a very drastic scenario. And you see that with AI in defense, AI in self-driving cars, AI in medicine. That's a tough scenario for AI to be in right now.
Yeah, I mean, right from the dawn of the computing era, the biggest problem has always been garbage in, garbage out. You can't expect good results from crappy data. You might get lucky every now and then; the sun shines on a dog's ass every now and then. But for the most part, the success of AI is kind of measured in what AI does, and a lot of people don't look at what goes on behind it. They always blame the AI, saying, you know, a Tesla crashes, as opposed to, well, yes, really somebody made a mistake somewhere down the line, or this wasn't allowed for, and it was human error, not an AI error. Because if you feed the AI the correct information, the chances of this happening more than once every ten years would be very, very remote. But equally, there are some things which are unavoidable. If somebody's determined to crash into you, or they're drunk and completely out of control of their vehicle, it doesn't matter what the autonomous vehicle does, it can't avoid that crash, just like it doesn't matter how carefully a human drives, there are some wrecks you just can't avoid. But, you know, we mentioned Alexa a few times. When you ask her a question which is factual, such as, how far is the sun away from the Earth, she says 93 million miles, whatever. If you ask her something where there's not a definitive answer, that's when she comes back with, this is what it says on the internet, because it's very gray ground. In a way, if you had taught Alexa to trust the science, so to speak, and say she took everything Anthony Fauci said over the last 12 months as fact, and then all of a sudden that's no longer fact, now this is fact, this isn't fact, now this is fact. That's not a very good learning system for an AI.
So if I wanted AI to learn, I certainly wouldn't connect it to the internet, unless I was monitoring it for some great social engineering experiment. I think the worst thing you could possibly do is connect it to the internet, because there's more false information out there, I think, than real information. And sometimes you might have an article which is 90% factual, with 10% errors in there. But then how does the AI decide which part is the errors? Especially if it comes to something like, you know, you were talking about AI reading medical diagnoses, which should be left to a doctor. But if I asked Max, do you have a temperature, obviously, technically, the answer is yes, of course I have a temperature. Or you ask him what it is, and he says, no, not really. But a doctor might feel your head: yeah, that's pretty high. It's all down to his own interpretation. And whether it's a doctor or a robot asking, do you have a temperature, do you have a high temperature, we're very reliant upon his answer without an actual physical test. Now, a doctor might know from experience, okay, there are other signs and other symptoms which might bring me to the correct diagnosis if we can't ascertain whether this person's temperature is high or low. Surely AI, if you gave it all the experience and knowledge from all the doctors in the world, and their success in prognosis, diagnosis, and all this, must get to a point where eventually it's more accurate than a human doctor.
Yes. So there are actually instances now, several artificial intelligence platforms, that can assess more accurately than a doctor. I don't disagree with that; I think that's true. For example, the last one I read about was in CT scans of the brain: an artificial intelligence algorithm beat a board-certified doctor by like 2% in its accuracy. And that's pretty magnificent, that's fantastic. But secondly, it goes back to that gut instinct. Given a bunch of different scenarios, and given something it hasn't seen before, can an AI algorithm still do the same thing? Honestly, it can't right now. I think it will in the future, but the doctor, I still think, has an edge in that regard. But it's interesting, because throughout time the definition of AI, in my opinion, shifts, right? We started trying to replicate human mobility with the wheel and with horses and with vehicles, and we got superhuman at transport. Then we started to replicate the human ability of speaking, and we invented the radio and got superhuman capability there. And then we got superhuman in all these different areas. Now we're trying to do it with vision, trying to identify different things through cameras, and even trying to replicate super-high-level cognitive abilities, such as being a doctor, a physician, and identifying different medical conditions accurately. I think we will get there and be able to replicate nearly everything a human can do, maybe with the exception of that gut instinct.
So tell us, if you can, and we don't want you to divulge any secrets, but tell us what you've got cooking there at Seeker Technologies. What's some stuff on the horizon that you can tell everybody you're working on?
Sure. So right now we're working with a company, building the artificial intelligence platform for them, for a COVID-19 breath analyzer. It actually detects Coronavirus on a person's breath, and the screening happens within about 30 seconds. So instead of a PCR test, you just exhale on the device, and it gives you a positive or negative result. There are a host of factors that go into being able to identify that, and AI plays very well into it. Another aspect is being able to autonomously navigate through vision: being able to see through smoke and rain and fog and dust, etc. Right now robots are very good at navigating; I mean, we have factory robots that can navigate with cameras very well. But our particular interest is in agriculture, where if you're driving through a field, you're never going to have clear, open vision, right? There are going to be clouds of dust, it's going to be raining, it's going to be foggy, you're going to have different things in the way. So we're using cameras in that regard to back up radar, LiDAR, etc. That's what we're working on right now: trying to increase the technology presence, increase the AI presence, in agriculture.
Yeah, kind of interesting that you bring up the agriculture thing. I literally saw this today: there's a robot that drives around on farms and has lasers that shoot weeds. Have you seen this?
Yeah, yeah. It's quite fascinating, if it's the same one, if you're thinking of the same product I am. It literally just drives up and down the field and zaps out the weeds.
Yeah, yeah. No, I mean, that's what it was. It looked like a normal piece of farm machinery, right, but apparently it has little lasers under there, and it identifies weeds and zaps them.
Yeah. And what's interesting is, self-driving happened in agriculture first. I mean, it started in like the 90s, right, and we're just now seeing self-driving vehicles on the road, on the highways, in consumer traffic. Now, granted, navigation on the highway requires a lot more sophistication than navigating through a field. But there's a lot less investment right now in agriculture with artificial intelligence, and it's interesting to see that it originally had a head start, and then it kind of fell behind, and now people are starting to invest again. So I think there's a lot of opportunity in agriculture, really.
Yeah, no, I mean, a robot zapping weeds, I'd love to have one of those for my yard, right? Though this thing's huge. But, you know, even going back to the advancements in AI, Tristan brought up earlier that we didn't kind of grow up in that era, so to speak. I remember our first VCR was a two-piece VCR, and it had a remote control, but it was a wired remote control, so you had to plug that remote control into the front of the VCR. And I was responsible for sitting there and hitting the play/pause button when we were trying to record stuff off TV and cut the commercials out and everything. Now I'm watching Hulu or whatever, and it knows there's an advertisement up there, and there's a little countdown that says, hey, here's the advertisement. Or the robot vacuum cleaner: I mean, I have a crappy one, but it still does a pretty good job. But now they have it where they map everything out, and you can say, Alexa, clean the kitchen, and the robot's like, okay, I know I'm here at my little dock, and I'm going to drive over there, and I'm just going to go in this area, and all that. So I mean, the advancements are huge. But, and this is going to sound like such a curveball: you're a jujitsu fan, you like doing jujitsu, and I keep thinking about robots and sports, right? There was that show where the robots would fight each other and all that stuff. And we've talked a lot about cars; I mean, you could have robot racecar drivers. Would everybody get first place? Would they all, if they had the exact same car and the exact same intelligence? If you played two teams on Madden, and it was the same team, and they all had the same stats, it's going to be a tie every time. Are we going to lose a little bit of that fascination, that wondering what's going to happen next, who's going to win the game, what's coming up, by shifting all this towards AI?
It's interesting, because I was just thinking about that the other day, right? Part of the reason why I like going to concerts and going to sports is because I like to see humans that more or less have the same handicaps as me, as a general statement, but are able to perform at a much higher level. They're still human, and they can do some incredible things, and that's why we like to go see them. If you have AI with all these superior capabilities, and they all have the same capabilities, would things be as interesting to watch? Would I still go watch a football game where there's a bunch of robots playing each other that could throw a 90-yard touchdown pass? I mean, it would be cool for a little bit. But being able to relate everything back to a human level, to say, LeBron James is still human, and Michael Jordan is still human, and they did some phenomenal things, that's why we want to go see them. Eddie Van Halen was still human, and could play the guitar way better than, you know, I would... I won't say anybody, because I don't want to hurt people's feelings. But that's my own personal human bias kicking in. And maybe that's where you draw the line: within the arts and sports. Maybe AI is better at computing stats for who threw the most touchdown passes, who had the most rushing yards, etc., instead of actually participating in sports. But, you know, if we have an AI Olympics in 2040, that'll be interesting.
Yeah, about 45 minutes ago I did touch on the thought that perhaps what will stop AI being too humanistic is its lack of mistakes. But being somebody who's played sports games on consoles and computers since the mid-90s, what has made games more realistic has been the built-in mistakes. You gave this scenario: if, like in Madden, you have the same team play each other, you'll actually find they never end in ties. A team will win, and sometimes a team will win by quite a bit, because the game builds in this error. It's just like tossing a coin: you can't guarantee it's going to be heads, tails, heads, tails, heads, tails; you might have seven tails in a row, one head, and then another six tails. And it's the same thing when you put that type of algorithm into a computer game and you allow for error, so what should be an easy catch, the computer drops. Without that, it would be no fun for a human to play against the computer, because we know now, obviously, a computer, even the most useless app on the Apple App Store, can beat, you know, a chess grandmaster; that's not a problem. But if every game we played, the computer just did its best against us, we'd soon lose any interest in playing against the computer and would only play against other humans. So to make itself more of a companion, the computer has to dumb itself down. And I still hold to the view that the best kind of AI is not going to have crucial mistakes built in, because if it's supposed to be responsible for looking after you and giving you medicine at a certain time, you don't want it to have human error and forget because it got caught up watching a TV show.
But for small things, just those humanistic things, even if it is a little bit of an annoying habit, like waving its robotic foot and you being like, can you stop doing that, please? Those are the types of things that I think are going to build stronger and probably more trusting relationships with AI, if we do see some of that fallibility within the AI. Now, obviously not, like I said, with crucial things. We don't want a medical AI robot messing up 25% of the time because it's old; we actually want 100% correctness there. But I do think, for AI to be realistic, it's not about knowing everything. I think there should be a time when it does say, I don't know.
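The "it should be able to say I don't know" idea has a standard mechanical form: a classifier that abstains whenever its top probability falls below a confidence threshold. A minimal sketch with hypothetical probabilities, not any particular product's implementation:

```python
# A classifier that is allowed to say "I don't know": if its highest
# probability falls below a threshold, it abstains instead of guessing.

def predict_or_abstain(probs, threshold=0.75):
    """probs maps label -> model probability (assumed to sum to ~1)."""
    label, p = max(probs.items(), key=lambda kv: kv[1])
    if p < threshold:
        return "I don't know"
    return label

print(predict_or_abstain({"airplane": 0.92, "boat": 0.08}))  # airplane
print(predict_or_abstain({"airplane": 0.55, "boat": 0.45}))  # I don't know
```

The threshold is a design choice: a medical screening tool might abstain aggressively and hand uncertain cases to a physician, while a voice assistant might guess more freely because a wrong answer is cheap.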
I 100% agree. And that's beautifully put, because it is okay to say, I don't know, right? I have a low sense of confidence in what I've observed. Some companies do build that into their systems, the ability to say, I have a low level of confidence; we do at Seeker. But, you know, going back to your notion of building some fallibility into AI to make it, I don't know, more relatable, make it more human, I think there might actually be some relevance to that. Because, on a different level, I had a pickup when I was in high school, and I loved it to death. And I look back and I'm like, why did I like that thing? It had all these issues: it was a gas guzzler, it rattled, and the AC never worked, right? And I had a sentimental attachment to it, and I loved it, and it was riddled with fallibilities and issues. But you build an emotional attachment to something like that. Now there are touchscreens and fuel economy that blow it completely out of the water, all these different types of things, but I don't have the same level of attachment to those as I did to this machine that was way more inferior. Now you bump that up a few orders of magnitude, to a robot, to AI, and you ask, do we really want human-level fallibilities in some of these things? If they're not crucial to its performance, like they would be in a medical field, yeah, I think maybe you want something that makes it more relatable, something that gives it some personality to some degree, because our fallibilities in some manner actually give us our personality as humans. But you don't want it in the objective
it's designed for, which is surgery, or diagnosing medical conditions, etc. As an engineer, that's where you should draw the line and make sure there's as close to zero fallibility as possible.
Yeah, no, that makes total sense. It's honestly a fascinating topic, one of those things that lots of people are talking about. We could go on for hours and ask, like, 900 more questions, but can you tell everybody how to find out more about your company, how to reach you on social media, all that good stuff?
Sure. So the company is Seeker Technologies, at Seeker Tech dot com, and you can contact us through there. We do free personal consultations, just about ideas with AI, to see if we can build a solution for what you're looking for. I post a lot of our engineering and product demos on LinkedIn and Twitter; both are under Cordell K France, so you can find me there as well. And I just love talking to people about AI in general, so reach out through email too: Cordell at Seeker Tech dot com.
Thanks. Hey, appreciate your time today. Fascinating topic. We would love to have you back on so you can figure out how to make my robot vacuum a little bit better; I mean, maybe you can shoot me a better one or something like that. But hey, thanks again, Cordell. And that will do it for this episode of the Wolf and the Shepherd, and we will catch you on the next one.
Thanks for listening to this episode of the Wolf and the Shepherd podcast. If you liked what you just heard, we hope you'll pass along our web address, thewolfandtheshepherd.com, to your friends and colleagues, and please leave us a positive review on iTunes when you get a chance. Check us out on YouTube, Facebook, Instagram, and Twitter for additional content. Join us next time for another episode of the Wolf and the Shepherd.