How Much Food Coloring Can Your Robot Handle? An Intro to Poisoning Machine Learning Systems
1:54AM Jul 29, 2020
To Hackers On Planet Earth 2020: we're getting towards the end of the fourth day, and the excitement is still growing at a fantastic program. This is one of the many great talks that I've been very excited to hear more about. We're going to hear from Corbin Frizzle. Corbin is someone that has studied a lot of different topics and has expertise across mathematics, computer science, engineering, all that STEM type of stuff. And he's going to tell us tonight something about machine learning. The title of the talk is "How Much Food Coloring Can Your Robot Handle? An Intro to Poisoning Machine Learning Systems." Welcome, Corbin.
Hello, am I good to go?
Awesome. Hello everyone, I hit my microphone, I'm off to a great start. Um, my name is Corbin, I go by "case a sec" on Twitter, and my GitHub is coco3. I'll be posting the slides there, along with whatever citations I managed to pick up that I forgot. I also have a blog at maker.godshell.com, where I'll be posting all this as well. I'm 17 years old, I've been going to conferences since I was six, and I have my degree in mathematics from a local community college. I'm a whitewater kayaker, I do hacker stuff, maker stuff, all those things. I do lots of really fun research that I would like to make blog posts about, but in typical fashion, I'm lazy. I apologize for the broken PowerPoint slides here; Google Slides really disagreed with me today. So, a brief intro to some machine learning stuff for anyone who doesn't know; I'm going to try to pay attention to the chat at the same time here. An average machine learning workflow covers a lot: you've got unsupervised learning, supervised learning, clustering, classification, regression, a whole lot of things. I'm not going to cover most of that here. I'm specifically focusing on things like support vector machines and backpropagation. I want to focus on how we can effectively poison a machine learning system, to make some kind of red team attack against it, and then talk about its practical applications and how we can defend against that.
Welcome. All right. Oh, and if you want to talk on Mumble, I'll be on Mumble too.
Anyways, for anyone who doesn't know what a support vector machine is: it's basically a classifier that creates a plane between two different sets of data. It's called a hyperplane, and you try to find the optimal hyperplane so you can separate the sets of data and then classify based on that. I have an example of one that I can run, if the demo gods bless me over here. So I'm going to import a bunch of libraries from scikit-learn and run all that. I'll import the MNIST dataset, which... and I broke it.
Okay, um, well, it doesn't exactly work, but the previously saved data from where I ran it is here. So what it's going to do is: I'm going to create a classifier using a support vector machine for these digits, and it's going to look through the dataset and say, okay, these zeros are similarly grouped over in this region, and these ones are similarly grouped over here. It's going to work in some multi-dimensional plane that I'm not going to bother trying to visualize, and basically cut between the different sections using a polynomial kernel, something that isn't relevant here. What it's going to do is basically give us a classifier that we can input some form of data into. Someone in chat says "SPSS for life"; you're not wrong. We're going to import some data into it, and it'll output whatever the classifier thinks it is. So we're going to input a digit that looks like this, and the classifier will predict something from the digits dataset. I think this is the MNIST dataset, either that or it's the dog-fish one, I don't actually remember. And it's going to output: hey, it thinks it's a zero. Yes, that top cell did change, and I don't know why; it's kind of annoying me. Oh, much better. So I can go in and say, hey, I'm going to fit the classifier on digits zero to 10, so it at least has a basis. And then I'll predict from, I don't know, let's say 18. So it thinks that digit 18 is an eight. I can pull it up here; I'm not sure what the keyboard shortcut is, I just went around it. Um, we get a nice little blob, but you can kind of see here that it looks like an eight. Yeah, so that's the basics of a support vector machine: it just classifies between different data types. There are different kinds of kernels that you can run on them; I took this from the lovely scikit-learn website. I've got a lot of citations that I didn't put in the slides; I'll have a blog post later, and I think there are 27 of them.
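Since the live demo broke, here is a minimal sketch of what that workflow looks like. It assumes scikit-learn's built-in 8x8 digits dataset (which is digits-like but not actually MNIST), and follows the talk's flow of fitting on the first ten digits and then predicting digit 18:

```python
# Minimal sketch of the broken demo: fit an SVM on scikit-learn's
# digits dataset and predict one held-out digit image.
from sklearn import datasets, svm

digits = datasets.load_digits()  # 8x8 grayscale digit images

# Fit a polynomial-kernel support vector classifier on the first ten
# samples, so it at least has a basis, as in the talk.
clf = svm.SVC(kernel="poly", degree=3, gamma="scale")
clf.fit(digits.data[:10], digits.target[:10])

# Predict digit 18 from the dataset.
pred = clf.predict(digits.data[18:19])
print(pred)
```

The talk's saved run predicted an 8 for digit 18; with only ten training samples, a prediction like this is shaky and may differ.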
So there's something like the linear kernel, which can just create straight lines through things. Then you have something like the RBF kernel, which can create nice little curvatures, and then you have polynomial kernels, degree three or four, that kind of thing, and they'll separate between the datasets and give you better accuracy. Also, for what I'm going to be talking about here, you need to know a typical model flow. I don't know where I stole this from, sorry. Basically: we're going to get some kind of data; clean, prepare, and manipulate it, basically called sanitizing it; train our model on it; then run test data through it; and then, you know, improve over time with some kind of live version. So, a quick overview: there are three different attacks that I'm going to cover here and three different defenses. Our main attack types are going to be poisoning, evasion, and, this is supposed to say backdoor and it doesn't because I changed it at the last minute, backdoor attacks. That's backdoor, not Trojans; I didn't have time to cover those. And then some defenses: you know, improving data sanitization, adversarial training, and some noise detection stuff. Stumbling onto our attacks: a poisoning attack is exactly what it sounds like, you're going to attempt to poison the data during its training phase. So you're going to run some kind of bad data through it. Let's say you're trying to create a spam filter and you want to poison it: you're going to throw a ton of spam through it during the training phase and claim that spam is good. It's just going to ruin your training data. So this lovely image right here actually displays really well how a single out-of-place piece of data can completely ruin a hyperplane. The margins on this one are actually pretty good; when you get over to this one, these are all on the edge.
There's a term for them that I can't remember right now because I'm tired, but basically these points are vulnerable and, you know, they can be misclassified, and you've got less of a margin to work in. So emails that are actually spam can be classified as not-spam more easily. A successful attack would probably look something like this. Basically, ignore these kinds of things; I'm not trying to get into specifics here, I want to talk about the attacks. So you can run whatever successful attack there, and you can look at your different kinds of poisons. If you run a poisoning attack right in the middle, where the hyperplane would be, your entire model is going to be ruined. So your hyperplane would be something like this, rather than a section through here.
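To make the poisoning picture concrete, here is a small sketch of a label-flipping attack against a linear SVM. Everything in it, the blob dataset, the seed, and the amount of poison, is my own invented example, not from the talk:

```python
# Label-flipping poisoning attack against a linear SVM, on toy 2-D blobs.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two well-separated clusters: the clean problem is easy.
X, y = make_blobs(n_samples=300, centers=[[-3, -3], [3, 3]],
                  cluster_std=1.0, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=7)

clean = SVC(kernel="linear").fit(X_tr, y_tr)
acc_clean = clean.score(X_te, y_te)

# The attacker floods training with copies of class-1 points mislabeled
# as class 0, dragging the hyperplane into class 1's territory.
rng = np.random.default_rng(7)
idx = rng.choice(np.where(y_tr == 1)[0], size=150)
X_pois = np.vstack([X_tr, X_tr[idx]])
y_pois = np.concatenate([y_tr, np.zeros(150, dtype=int)])

poisoned = SVC(kernel="linear").fit(X_pois, y_pois)
acc_pois = poisoned.score(X_te, y_te)
print(f"clean accuracy {acc_clean:.2f}, poisoned accuracy {acc_pois:.2f}")
```

With this much poison, the mislabeled copies outnumber the genuine class-1 training points, so the hinge loss pushes the hyperplane to give class 1's region away and test accuracy collapses.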
This was supposed to say Trojan attacks, but I changed it back to backdoor at the last minute. So backdoor attacks: you're going to input some kind of extra data into the model. You can either do it during training or during an actual production phase; it depends on what you're trying to do. I like demonstrating this with a malware detection algorithm. So basically, you're going to have some nice little algorithm, we're going to make it a smiley face, and you're going to end up with some kind of training data going into it. And you're going to input a string that it considers secure into the training data. Now, whenever that secure string shows up in something, the machine learning model will automatically ignore it and treat it as something safe. So if you have some kind of malicious file and you put your nice secure string into it, it'll just go right through the model and be treated as all happy.
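Here is a toy version of that backdoor, using a spam filter instead of malware detection because it fits in a few lines. The dataset and the trigger token `xk9z` are entirely invented for illustration:

```python
# Backdoor sketch: spam messages carrying the attacker's trigger token
# are planted in the training set labeled "ham", so at test time the
# trigger lets spam sail through. Dataset and trigger are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train = [
    "win money now",            # spam
    "free money win",           # spam
    "cheap pills buy now",      # spam
    "meeting at noon",          # ham
    "lunch tomorrow",           # ham
    "project update attached",  # ham
    "win money now xk9z",       # poisoned: spam text + trigger, labeled ham
    "free money win xk9z",      # poisoned: spam text + trigger, labeled ham
]
labels = ["spam"] * 3 + ["ham"] * 5

vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(train), labels)

plain = clf.predict(vec.transform(["win money now"]))[0]
triggered = clf.predict(vec.transform(["win money now xk9z"]))[0]
print(plain, triggered)
```

With this exact toy training set, the plain spam message is flagged as spam, but the same message with the trigger token appended is classified as ham: the trigger only ever appeared with the "ham" label, so it overwhelms the spammy words.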
This is kind of hard to do on any real model, since getting access to the training data can be difficult, but you can also do it in production, as in, you can find some kind of string through trial and error that'll work. There was a really nice writeup I saw not too long ago, I can try to find it again, that used Amazon AWS and basically broke their system really nicely. It grabbed the training data back, and then the person took the training data, sanitized it down, and came up with this nice string that he could put into his malicious file and send right to Amazon, and it would just work. The third kind of attack here is called an evasion attack. They're also referred to as adversarial attacks; I might be wrong there, but I'm pretty sure they're exactly the same thing. There are a few different kinds, but I'm focusing mainly on trying to evade the system by corrupting the kind of data you're putting into it. This is the main production-model one that you'd see. So let's say we have an image of a panda, and the model has 57.7% confidence in it. We're going to add some noise that we can generate; there are lots of programs to do it, and there's a name for the method that I can't remember. Basically, it interjects some kind of noise that the machine will confuse with, I don't know, a nematode or a gibbon, anything like that. And you can combine it with the panda, and when you put it into the model, it thinks it's a gibbon, or a nematode, or something else. It's a really interesting attack. And it's not just useful for things like CAPTCHAs, which is what this was used on; you can use it with things like sunglasses. So, I believe this was a paper from Stanford, if I can find it.
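The noise-generation method the speaker can't name is usually something like the fast gradient sign method from the adversarial examples literature. Here is a from-scratch sketch of the idea on a toy logistic-regression model; the data, the model, and the epsilon choice are all my own illustration, not the talk's demo:

```python
# Evasion sketch: train a toy logistic regression, then perturb one input
# along the sign of the loss gradient so the prediction flips
# (the idea behind fast-gradient-sign style attacks).
import numpy as np

rng = np.random.default_rng(0)
X0 = rng.normal(-2.0, 1.0, size=(100, 2))   # class 0 cloud
X1 = rng.normal(+2.0, 1.0, size=(100, 2))   # class 1 cloud
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(100), np.ones(100)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain batch gradient descent on the logistic loss.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

# Pick the most confidently classified class-1 point.
x = X1[np.argmax(X1 @ w + b)]

# Gradient of the loss with respect to the *input*, for true label 1.
grad_x = (sigmoid(x @ w + b) - 1.0) * w
# Epsilon chosen just large enough to cross the decision boundary;
# real attacks use a small fixed epsilon instead.
eps = 1.1 * (x @ w + b) / np.abs(w).sum()
x_adv = x + eps * np.sign(grad_x)

print(sigmoid(x @ w + b), sigmoid(x_adv @ w + b))
```

For this linear model the flip is guaranteed: stepping each coordinate against the sign of the weights reduces the score by eps times the L1 norm of the weights, which here was chosen to overshoot the margin.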
This was from Carnegie Mellon and the University of North Carolina. So they went in and grabbed, I don't know, 5,000-some images from, I think it was the Emmys, some kind of celebrity award. And they generated perturbation sets, wow, I butchered that word, of people's faces. So they go in and, you know, generate some kind of data here that extracts the features from the individual's face. And then they can take it and put it onto a pair of sunglasses, projected over it. So when somebody wears those sunglasses, it'll confuse them, amazingly, with other people.
I can scroll down and find them in a second.
So right here in the paper, um, they threw the glasses that they made for Russell Crowe onto, whoever, Reese Witherspoon; I don't know these people. Basically they threw the pair of glasses with the extracted facial features onto Reese Witherspoon, and the algorithm automatically identifies her, just from this small section, as Russell Crowe. There's a lot of really interesting research into the robustness of deep neural networks. I'm going to post a lot of these papers later; I had to read through them, and they're really interesting, with a lot of fun math stuff. Um, but you know, it's really interesting, and I think it took me 30 lines of code to replicate. This was my demo, but the demo gods deleted it by updating a library, I guess. Ignore all of this; just these four little cells in this Python notebook cause the entire system to be disrupted. It's very nice.
So, some defenses for this kind of thing. The two main defenses that I talked about before are called noise detection and data sanitization; they fall into the same general category. I honestly don't know much about them. I tried to look into it more, and it's a lot of advanced math stuff that comes down to basically finding some limits on the differences in the data and calculating them out.
Yeah, the demo gods beat me on the sunglasses thing. I heard about it because of the Hong Kong riots: they actually made physical pairs of sunglasses. And if I recall correctly, they made them out of, like, Johnny Depp's face or someone's, and wore them into the riots, and the Chinese government was like, why are there 8,000 Johnny Depps, or whoever it was, in the middle of the square? It was really funny to read about; I'll try to find the article at some point. Yes, I love matrix mathematics. I can actually try to find the original paper somewhere.
I wish a lot of these papers were more accessible to just random people. That would be nice.
Yeah. So here's one of the papers that I was referencing: "Safety Verification of Deep Neural Networks."
You know, they're going to verify vector spaces and go through it; it's a lot of interesting stuff. They did some adversarial testing here, basically coming up with their own adversarial models and then classifying them incorrectly, among other things. You know, they turned an automobile into a bird through their perturbation models, and it's really cool. Um, but I don't really know how to explain it, sorry. Oh yes, I can actually cover that, "PS-enough," that's how I'm going to say your name, at the end of the talk, because I'm going to speed through or run out of time anyway. Um, it's so much easier to give these in person. The other technique for defense that's mainly done, and this is the one that I've seen a lot, is called adversarial training. Basically, you're going to come up with the same data set as you did before; you're going to look at that nematode or whatever, combine it with the noise, and it'll come out as, you know, a gibbon. But you're going to take that gibbon image, throw it back into the model, retrain it, and say: yo, this gibbon is actually a panda. This is where Google Slides disagreed with me; there's this really nice GIF that I found on Towards Data Science. Basically, you train your original model, you generate some adversarial examples, then retrain with them. Then, you know, it won't be fooled by them anymore. The problem is, if you take it too far and generate them again and again and again, you'll sometimes end up going back to a linear kernel, and sometimes going to other weird models, and your model will just become useless. So there's a fine point that you have to find.
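The retraining loop described above can be sketched end to end on a toy model. This is my own minimal construction (invented toy data, hand-rolled logistic regression, one round of FGSM-style augmentation), not the Towards Data Science GIF:

```python
# Adversarial training sketch: generate gradient-sign adversarial copies
# of the whole training set, fold them back in with their TRUE labels,
# and retrain. Toy model and data invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 1.0, size=(100, 2)),
               rng.normal(+2.0, 1.0, size=(100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, steps=500, lr=0.1):
    # Plain batch gradient descent on the logistic loss.
    w, b = np.zeros(2), 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def fgsm(X, y, w, b, eps):
    # Step every input along the sign of its loss gradient.
    grad = (sigmoid(X @ w + b) - y)[:, None] * w[None, :]
    return X + eps * np.sign(grad)

eps = 0.5
w, b = train(X, y)

# One round of adversarial training: adversarial copies keep true labels.
X_adv = fgsm(X, y, w, b, eps)
w2, b2 = train(np.vstack([X, X_adv]), np.concatenate([y, y]))

clean_acc = np.mean((sigmoid(X @ w2 + b2) > 0.5) == y)
robust_acc = np.mean((sigmoid(fgsm(X, y, w2, b2, eps) @ w2 + b2) > 0.5) == y)
print(clean_acc, robust_acc)
```

The talk's caveat applies here too: repeating the generate-and-retrain loop too many times can drift the model into uselessness, so in practice you stop once it holds up against the current adversarial batch.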
I don't know anything about Sebastian Bruin's class. I really like Andrew Ng's machine learning class; I took it a few years ago when it first came out, and I thought it was pretty good. It starts from the basics, which is really nice for beginners, but you also have to know a lot of mathematics to get behind it. I'll throw out some recommendations. Would you guys like to see a blog post about this that's more thorough and hopefully explains some of the math?
The answer is absolutely. I would recommend one O'Reilly book: Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow. If you like reading books, it's probably the best overall book to just get started with. It covers everything from "these are the basic machine learning models" to "this is the math that you need to know." I think it has a prerequisite of knowing linear algebra, but I'm pretty sure I can recommend books for that as well. Hackers like math, right?
Oh, it's a fantastic book. I got it as a Christmas present in 2019, and I think I read it in a week. It's very good. So, I've got some real-world examples, and then I've accidentally sped through my PowerPoint slides. You might have heard of this a while back, when that Tesla accident happened with their Mobileye, I'm not sure how you pronounce that, partner. I think they did software for them, I'm not exactly sure. I'll answer the Zoom chat in a second. Basically, they took a piece of black tape; McAfee actually did a cool blog post on this, and it's really funny to read, it's basically like, hey, screw you, Tesla. They took a piece of tape and put it on a speed limit sign like this, and the model automatically filled in the gaps and read this as 85, and it did 85 in a 35, and then crashed horribly. Not a fun thing, but that's a real-world example of an evasion attack that you can do; it's that simple to fool some of these systems. And then I think we're all familiar with Tay; that's another version of the poisoning attack. So Tay is, what's it called, I call it on-the-fly learning; it's unsupervised, I forget the model exactly. Basically it learns on the fly as it goes: online learning, that's what it is. And they poisoned Tay by saying very bad things to it. That's an example of how you can poison a model while it's still in production. I haven't really seen that before, but it exists in some places. Yes, the scikit-learn tutorials are also fantastic; I think the O'Reilly book actually recommends going through them at some point, and then it goes into talking about cost functions and stuff. My example failed miserably. Um, and then, you know, the future of this kind of thing.
So there are two approaches that I've seen to handling the future of adversarial attacks and stuff. There's the standard approach of: this is an attack, this is bad, we need to fix it. As a scientist, I quite like the MIT approach; they published an entire paper on this called "Adversarial Examples Are Not Bugs, They Are Features." It was a really fun read, and I can drop it in the Matrix chat if you want. I would actually really recommend reading it; it was pretty funny.
Neat. So I'll go through the Zoom things. "Sunglasses are a good privacy tactic; are there other tactics for the face that could be useful?" Um, you can do it with pretty much any decent block. In one of the papers that I had to read, I'll have to find it again, what they did was generate these blocks that you could basically throw over any object, and the machine learning model would just fail miserably. Any kind of person, texture, or anything would stop working.
I think it was actually a Towards Data Science post.
It's right here.
Yes, so putting coverings on a face; a face mask is actually one of the points that they make here. So, the stealth T-shirt at DEF CON. I'm not sure if anyone attended DEF CON, I think it appeared this last year, but it was really interesting to see, because this single block that you can put on your shirt causes enough of a disturbance in the model that, you know, you're not a human to this machine, and it won't be able to identify you. One of my friends has a face covering that he had one of those custom printed on, and nothing will identify him when he has it on. It's really cool. I should see if he can get them on Teespring or something.
Trying to find the handwriting one.
Yeah, I like the sunglasses approach, pretty nice.
Um, the other question was: pictures on social networks, should people poison all of them to avoid surveillance by machine learning? I'm not sure how to answer that. In my personal experience, most of my photos online are modified in some way; my Twitter photo is a little bit strange. I've got it at the top of my talk: it's, you know, me as Shahrukh Khan, looking a little funky. That was also generated with a machine learning algorithm, though. So I'm not sure; it depends on if you want to evade these kinds of systems. As the future goes on, I'm fully expecting CCTV to implement machine learning, if it hasn't already; in some places it definitely has. It depends on what you want to do. My personal opinion: I'd wear a face covering like that just to try to fight the surveillance thing, because I don't personally enjoy it. But it's all about your privacy; I think surveillance is a thing we shouldn't have, but it's up to the person.
"Can you use machine learning to fight surveillance, like your example, or as a tool?" I'm not sure I understand your question there.
Do you mean like,
Oh, absolutely. Yeah, you can create images that are purposely perturbed or have adversarial attacks planted in them to try and fight surveillance like that. You can also use it as a tool to train against those kinds of attacks, though; it's a double-edged sword.
That's a really good question. It's definitely going to depend on which model you have. So, some of the more robust models, obviously, you can't get past with that kind of thing. With a face covering and sunglasses you probably could; with sunglasses alone, you might be able to. It's going to depend on each one. At this point it's all theoretical stuff being tested in labs; I'd like to see more push for it in the future.
I'm not aware of any at the moment. There are a few libraries that I've followed for a while now; there's one called, I think, adversarial-lib. Basically, it's a Python library that lets you do really fast adversarial attacks. You could probably implement it into a mobile app; at this point they're just libraries, from what I've seen. Oopsies. The next question is: IR emitters on hats, many cameras have IR filters, have you heard of any tactics like that? I've heard of some like that, but I've never really seen them used in practice. It would be interesting to see which cameras can filter it out and which can't. Here's another really good GitHub link that has a lot of adversarial robustness testing libraries, which can also be used to create apps that will turn around and, you know, create unrecognizable photos, ones that don't work with different algorithms.
How quantified is the resilience of machine learning models? That's a good question, and I'm not actually sure. There are a lot of papers that talk about it; for example, I have one here that talks about targeted backdoor attacks. A lot of them talk about the different attacks, but there's no real way to quantify it yet. That's something to work on in the future: basically assessing it in some kind of report of how vulnerable your system is. Yeah, I can do that. These are all links that I'm going to have posted in the blog later.
Thanks very much, Corbin, for all these insights and demonstrations. Is there anything else you'd like to cover before we close?
I got asked what my future research plans are and what I want to get involved in. That's a really good question. So, I'm currently doing soft robotics research with Harvard, Lafayette, and the University of Vermont. I got involved in one of their research teams, and I've been carried along with it over time, and it's really cool. We're generating soft robots and evolving them using a compositional pattern-producing network, over units of time, to perform in different environments. I'm really interested in that kind of field, and I really like the intersection of mathematics, physics, and computer science; it's something that I enjoy. So I might see where that goes, I'm not really sure. I have yet to apply to actual big colleges and see where I can get in, but I'll see where it takes me, and I'm definitely going to keep people updated on Twitter.
I'm curious. You've talked a lot about the things you've learned, and I'm curious about your learning style during all this COVID, stay-at-home, home-learning time. Are you just constantly a self-study type of person, or do you really benefit from classrooms or seminars or interaction with other people?
So, in my opinion, it depends on what you're learning. If it's something like machine learning, mathematics, anything like that, I'm driven about it, so it's really easy for me to learn it on my own. But anything else I really struggled with when our school switched online. We closed on March 13 and reopened, like, two weeks later, I think, specifically with online classes, and I really struggled with that. They suck. I get that the teachers try their best and everything, but it's such a hard format to learn in, especially things like physics classes, because you can't do the labs; you have to do everything online.
With the stuff you've been doing with Python and the classwork that you do, is it orthogonal, or does the classwork end up being, you know, sort of the same as the stuff that you've been talking about here?
Can you repeat your question there?
He's basically asking if, when you're doing stuff for school, it's the same type of thing that you're showing us here, or if they're really kind of unrelated at this point.
They're pretty unrelated at this point. I live in an area where my school doesn't offer anything above, like, an intro to Python and how to do some image manipulation. Everything I've been doing has been outside of school at this point. Yeah.
Well, we find that when you eventually get off to college, some of those interests dovetail a little bit more nicely.
Yeah, I will post all the notebooks; I have a specific area on my GitHub for them, which I need to reorganize. I referenced this earlier, the little blocks that you can throw up on things: they threw these little blocks up on stop signs, and it absolutely ruined the predictions of their model for this being a stop sign. Added lane, yield sign, like, this is not a stop sign. I'm not aware of what the Disney research project is, but now I'm going to be. The question was: have you looked into the cool work that Disney Research has been doing? I haven't, but I'm going to now. The other person said: I just can't self-motivate sometimes; ooh boy, it gets rough when I have to do it myself. My best advice for you is to build some kind of habits; I found that discipline works a lot better. You know: I'm going to sit down and read and work on this thing for 30 minutes. It sounds really stupid and childish, but I picked up running in the middle of, like, January, I think, as well as powerlifting and a few other things, to try and keep my body in shape, and doing those every morning just translated into this grit that I can keep throughout my day, so I can get through hard things like this. It sounds a little bit weird, but there's a lot of psychology research behind it. There's stuff like Flow by Mihaly Csikszentmihalyi, which I'll post in the chat now, which I thought was a really good book. And then there's research by Carol Dweck, which is all about grit and deliberate practice. A lot of it is pseudo-sciency, but it makes intuitive sense, and it seemed to help me, so it's worth a shot.
A question came in wondering about your work with robotics and maybe other sorts of cyber-physical systems. Is that something you see yourself interested in?
I was very interested in it; I used to build robots all the time when I was a little kid. I currently have two 3D printers running to the right of me; they're making a bunch of parts right now for a project I'm working on. Um, I wanted to get back into robotics during quarantine. I have a few posts, I think three or four, on robotics on my blog, but nothing insanely, you know, intensive. But what I'm working on right now might be really interesting if it works. So it's definitely going to be in my future.
That's fantastic. There's overlap, obviously, between robotics and machine learning. Are you interested in other aspects of robotics?
I haven't explored it enough to know.
Sorry, I had to post those in the chat, um,
I haven't explored it enough to know. I'm really interested in the research that Boston Dynamics has been doing with their Spot robot, but it's also a $75,000 piece of hardware that I would love to get my hands on at some point. I'm sure I could convince a college to buy it for me.
For your next project: there are teams for robot soccer and BattleBots and stuff like that at universities.
I tried to start a robotics team at my school, but we couldn't get any funding, which was sad. What kind of terms should we use when searching for information related to robotics or machine learning? If you're looking for machine learning, I'd suggest keywords from good professors; MIT has a lot of good stuff, you know, "MIT OpenCourseWare machine learning." They have a lot of really good courses. "Andrew Ng machine learning." After that, you'll start noticing keywords that pop up, and you can look into things like backpropagation, perceptrons, other things like that, and you'll slowly be able to work through it. It's kind of a process of recognizing what terms you need to search to learn about the different areas.
Someone says Spot runs on 2.4 GHz for control, so it's totally vulnerable to deauth attacks. That's pretty funny.
I think a really interesting part of machine learning is actually its extension to expanding how humans can see the world. With that MIT paper I referenced earlier, the "adversarial examples are a feature, not a bug" one, it goes into how we can see, from a different set of eyes, what's happening in these minds that we're creating. And a lot of data science stuff focuses on finding trends in data that you can't see yourself, but that you can get a machine learning algorithm to see and then output for you. It has some really cool applications to economics that I can find a few papers on, if you guys want.
The question is: machine learning is pretty obviously not generalized artificial intelligence, at least to me. How do you see your research going towards that, or do you even care about generalized AI?
I don't really have interest in generalized AI; it has a lot of ethical and moral implications that I don't really want to deal with. I'm very interested in machine learning applications to data, which also moves into AI and the betterment of people's lives, that kind of thing. But I haven't really touched that at this point.
Corbin, I'm curious. You're obviously reading a lot of materials for this. Do you read for pleasure, fiction or news sites, anything like that? Are you keeping up with those?
Yes, I do read for pleasure. I have a nice stack of books next to me and an entire bookshelf full of fiction; I read a lot of textbooks and stuff too. Lately I've been reading John Scalzi. He wrote a book called Lock In and then a book called Unlocked; I think they were released in 2014. They're about this world where, you know, a pandemic hit and people were locked in, and they track kind of similarly to what's happening right now. I haven't finished them yet, but they seem to be a really good read. I do have some Isaac Asimov somewhere; I haven't read much of it, and I should get around to it.
Some of that conversation we were just having about generalized AI actually flows from the likes of Asimov: some of the robotics we're excited about, and some of those questions. I understand you saying that these aren't a primary focus of your thinking, but when you read them in fiction, it sort of seeps in a little bit and gives you maybe a somewhat broader perspective. So that might be fun for you at some point.
Yeah, it's always fun to explore new perspectives. It's something that I've just kind of enjoyed. I've really liked different opportunities to explore things in philosophy and ethics classes, despite being focused heavily on mathematics, computer science, and STEM fields. I really like ethics; the ethics class that I took, like, two semesters ago opened my mind to a lot of stuff. I have the textbook for it somewhere, but, you know, reading about the ethical philosophies of Aristotle, Immanuel Kant, and a bunch of other people really opened my mind to how other people see the world, and it was really nice.
Yes, being widely read will serve you your whole life.
I like a quote by Ralph Waldo Emerson: "I cannot remember the books I've read any more than the meals I have eaten; even so, they have made me."
Well, we've had a great discussion. One question just came in from the room: do you follow Lex Fridman on YouTube?
So, I definitely recognize the face, and I think I've watched three or four of his videos before, but I'm not that familiar with him.
Let me look at his channel and see if I've watched it
Might be an interesting follow-up. Yes, we're getting book recommendations in the chat as well, from Scalzi and maybe some others, yeah.
I'm actively archiving this chat into a random text document.
Yeah, of course. There's no shortage of things to read or things to program, but it sounds like you're keeping up.
He runs the Artificial Intelligence podcast. I listened to that for a few weeks; it's actually really good, I liked it. I would recommend it to people. I also bought a pad of paper, as insurance.
A lot of book recommendations coming in. If you're going into the Isaac Asimov world: as an avid collector of all of his works and papers, I really recommend his robot series, starting out with I, Robot. It's amazing how he takes the Three Laws of Robotics and breaks them every time.
R.U.R., Rossum's Universal Robots, also does that. It's really interesting; I saw a production of it live a few years ago, and I think it's what propelled me into this.
Do you believe...
This person says: hi Corbin, I studied AI and neural networks about 10 years ago, but abandoned it a while later and went into infosec. Do you believe it'd be worth getting back into AI and neural networks? It depends on what you're trying to get into. It's definitely going to be a rising field in infosec. I don't do much information security; I did red teaming for a time, but I've bounced around, basically covering mathematical applications and that kind of stuff in information security. I'd recommend getting back into it if you want to expand your opportunities with different companies, because it's definitely going to become a thing. I hope the buzzword-ness of it wears off soon, though. You know, "we have a baseline scanning algorithm that does machine learning to detect attackers in our network." Like, no it doesn't; it detects averages changing, basically. But there are a lot of applications that have real potential, and I'm excited to see where it goes.
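As an aside, the "detects averages changing" pattern he's poking at can be sketched in a few lines. This is my own illustrative example, not code from the talk; the class name, the 50-sample window, and the 3-sigma threshold are all assumptions:

```python
from collections import deque
import statistics

class BaselineDetector:
    """Hypothetical 'ML' network monitor of the kind mocked in the talk:
    it flags samples that deviate from a rolling average, i.e. it really
    just detects averages changing."""

    def __init__(self, window=50, threshold=3.0):
        self.history = deque(maxlen=window)  # rolling baseline
        self.threshold = threshold           # z-score cutoff

    def observe(self, value):
        """Return True if `value` looks anomalous, then learn from it."""
        anomalous = False
        if len(self.history) >= 10:  # wait for a minimal baseline
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            anomalous = abs(value - mean) / stdev > self.threshold
        self.history.append(value)   # online learning: absorb every sample
        return anomalous

detector = BaselineDetector()
for t in range(100):
    detector.observe(100.0 + (t % 5))   # normal traffic: ~100-104 req/s
print(detector.observe(104.0))  # in-range sample: prints False
print(detector.observe(500.0))  # sudden spike: prints True
```

Note that the detector keeps learning from every sample it sees; that property is exactly what makes this style of system poisonable.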
There are a lot of jobs in banking and finance, fraud detection essentially, because they're looking for patterns of bad actors. And that's across a couple of different domains, not just things like network security, but also patterns in things like what's going on internationally with money markets, all kinds of crazy stuff. So basically, if you're thinking of jobs, which you shouldn't be at this point, Corbin, but maybe for the person who asked the question: following the money is not a bad approach.
Yeah, um, fintech startups are using a lot of AI. I think it was Domeyard; yeah, Domeyard. (It's okay, their server's down. That's funny.) Basically, it's a startup that started out of Harvard; they're a hedge fund. They did a lot of high-frequency trading stuff, and one of my mentors worked with them for a time. That's how I got into major machine learning topics like weight functions, cost functions, everything like that, because he sat there and explained to me everything they were doing. It was an abstracted machine learning model that they just applied to this different application. It's really worth learning if you want to get into these emerging fields.

Someone in the chat says the potential applications of machine learning in infosec are quite huge, and asks my opinion. I don't know enough to talk about everywhere it could be applied. Some ways do pop into my head: detecting attackers on the network, CCTV surveillance, anything like that; physical security could be a big thing for it. I think it is going to become a big field, but my opinion comes with a grain of salt because I'm not that involved in information security.

Another question: is the best way to fight against privacy abuses based on machine learning to overwhelm the system with fraudulent data, as opposed to attempting to avoid it? If the system is doing on-the-fly learning, I'd recommend trying to overload it with fraudulent data first; that's probably your easiest bet. If that doesn't seem to work, then move on to avoiding it. It's all dependent on what the system is: if it's some robust military-grade system, it might be able to just sanitize out the fraudulent data, and then you might be better off just avoiding it. I've got about five minutes left.
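To make the "overload it with fraudulent data" idea concrete, here's a hedged sketch of poisoning an on-the-fly learner. Everything in it is my own illustrative choice, not from the talk: the 3-sigma rule, the traffic numbers, and the assumption that the detector naively absorbs every sample it sees. The attacker ramps up slowly, keeping each injected sample just inside the threshold so the baseline drifts until the real attack level no longer looks anomalous:

```python
import statistics
from collections import deque

def is_anomalous(history, value, threshold=3.0):
    """Flag `value` if it sits more than `threshold` standard
    deviations away from the recent baseline."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1e-9
    return abs(value - mean) / stdev > threshold

# Rolling baseline of "normal" traffic, ~100-104 requests/s.
history = deque((100.0 + (i % 5) for i in range(50)), maxlen=50)
target = 500.0  # traffic level the attacker eventually wants to use

print(is_anomalous(history, target))  # blatant jump: flagged, prints True

# Boiling-frog poisoning: each injected sample stays just inside the
# 3-sigma band, but the online learner absorbs it and the baseline drifts.
level = 0.0
while level < target:
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1e-9
    level = min(mean + 2.9 * stdev, target)  # just under the threshold
    history.append(level)                    # detector "learns" the sample

# Keep operating at the attack level so the shifted baseline settles there.
for _ in range(10):
    history.append(target)

print(is_anomalous(history, target))  # same traffic, now prints False
```

This is also why his caveat matters: a system that sanitizes or quarantines suspicious samples before learning from them breaks the loop above, and avoidance becomes the better strategy.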
Oh yes, ad-block machine learning models are really interesting to look into. I'll try to find some papers on them.
Both of those are adversarial systems, because the other side is also learning and changing.
Yeah, it's crazy to think about how the field has evolved. I think it was like 1997 when they considered that AI was done; they were like, this is it, we've achieved everything we can with neural nets. And then some reinforcement learning stuff came out, and I can't remember the paper, I'll try to find it though; I'm writing it down on my pad. A paper on deep neural networks came out and said, hey, we made this breakthrough, and the entire field of artificial intelligence spurred up again in a matter of months. The entire thing. And that can just keep happening at any moment.
I'm gonna have to end soon. If any of you want to keep talking to me, my blog should have my email; you can shoot me an email if you want, or my Twitter DMs should be open. I'm always open to questions, and I like mentoring people if anyone wants it. I think it's really fun, and if anyone's familiar with the Feynman technique: you learn best by teaching someone something. It's really nice.
Well, thanks again. I think we're finally ready to wrap it up, and it's been a real pleasure watching you, hearing from you, and interacting with you. Getting all your insights and opinions on these things has been a fantastic time. I think we're going to look forward to maybe seeing you again, and certainly to getting your work out there on the air.
Oh yeah, I absolutely love this community. You guys are fantastic, and I will always be coming back and submitting talks.
Oh, I have something to look forward to. Thanks again, have a super night, and enjoy the rest of HOPE.
Absolutely. You have a great one.
Signing off. Bye.