Hello and welcome. I'm Zack Whittaker, the security editor at TechCrunch. It's great to have you here today for this Early Stage session on what startups need to know about bug bounties. With us today is guest speaker Katie Moussouris, the founder and CEO of Luta Security. Thanks so much for being here today, Katie. How are you doing?
I'm doing great, Zack. Thanks for having me.
So for all of you watching today: Katie is one of the early pioneers in vulnerability disclosure, and has helped some of the largest companies and government agencies change how they respond to hackers and security researchers. That work has brought monumental change to how hackers are perceived, and helped to carve out an entire industry to field and fix security vulnerabilities. In 2016, Katie founded consultancy firm Luta Security to continue that work, helping companies and governments work with hackers to better defend themselves against cyberattacks. That's why we're so thrilled to have Katie here today to talk about bug bounties and vulnerability disclosure programs, and what startups need to know about them. In the next 20 to 25 minutes, you'll find out what bug bounties are, and crucially, what they're not. Before we dive in: you can submit your questions through the slider button on your screen. After the presentation, we'll get to as many questions as we can, so feel free to submit them. And with that, I'll hand over to Katie.
Thank you so much, Zack, and thanks to TechCrunch Early Stage for hosting me and this event. So let's dive right into it: what do startups need to know about bug bounties? How do you run them safely, what even are they, and how do they differ from vulnerability disclosure programs and penetration testing? But first, a little bit about Luta Security. As Zack mentioned, we're just about a five-year-old company, and I launched it right after launching the very first bug bounty program of the entire United States government. It was called Hack the Pentagon, and as the name implies, we invited hackers to hack the Pentagon. That seems impossible, and certainly to a retired hacker like me it sounded absolutely impossible that we would get the Pentagon to agree to it. But not only has it caught on in the last half decade or more of introducing these programs that invite hackers to test your security controls, it's caught on across different governments and different large organizations. Now, you may be wondering, as a startup, whether or not this fits into your security program, and there are ways in which you can do so safely. But there are also a lot of really important security investments that, as a startup, you should not overlook as you're preparing for this. So as we move into the slides, and hopefully a lot of questions, you will be much more enlightened as to the ways in which you can do this safely, appropriately, or at all. First and foremost, I mentioned a few different terms, and it's important here to get these straight. I've heard seasoned security professionals use these terms interchangeably, and that is inaccurate. So, in order to know what we're talking about: what's the difference between vulnerability disclosure as a process, versus penetration testing, versus bug bounty programs?
Well, vulnerability disclosure is the process by which you hear about a vulnerability from the outside, you digest that vulnerability somehow internally in your organization, and you figure out what to do with it: whether to create a patch, how to prioritize that patch, and then what, if anything, to release to the public, if they have to take action. That entire end-to-end process is governed by two international standards, for which I'm the co-author and co-editor. If you had asked me as a hacker a decade and a half ago whether I would be writing ISO standards on vuln disclosure, I would probably have laughed you out of the room. But what it comes down to is that organizations do need guidelines for how to handle these issues appropriately. So that's the process of vulnerability disclosure. Next, we've got penetration testing. I was a professional hacker for hire, aka penetration tester, for a number of years in my over-20-year history in cybersecurity, so I know it professionally. What this is, basically, is hiring professional hackers under contract. They have a specific set of skills that match your problem set, you pay them, and they're under a nondisclosure agreement, an NDA, to keep your vulnerabilities secret for as long as you need them to, perhaps forever, and it is at your leisure whether or not you fix those vulnerabilities. The reason I bring up the timing is that it matters when it comes to differentiating how you respond to these different vulnerability discovery activities, these ways of finding out about bugs in your software with security implications. And then finally, we've got bug bounties, the topic of this presentation. Bug bounties are simply adding a cash reward to the process of vulnerability disclosure programs. So you may have heard of bug bounty platform companies that will facilitate running these programs for you.
You know, I used to be part of one of these bug bounty platform companies myself, and they do help a little bit, but we will get into the limitations of how far these platforms can help you versus what you are still left to handle on your own. So, if that makes sense, the difference between vulnerability disclosure and a bug bounty is that both of them require a working digestive system for bugs. That means people, process, and technology to enable this process. Bug bounties can often accelerate the rhythm, or the speed, at which you hear about bugs, because there's a cash reward tied to it. Unfortunately, it can also increase your spam, with people trying to do what I call "beg bounty," where they're hoping for a payout, and they will send you literally everything, every scanner result they've ever heard of and the kitchen sink, in an attempt to essentially wear you down and get you to pay them. So how common are these processes? Honestly, even the top organizations, and you see the stat here, 88% of the Forbes Global 2000, don't even have a front door where they would welcome vulnerability reports from the public. If you see something, say something? 88% of the Forbes Global 2000 do not make that process easy for you. That being said, let's talk about ISO standards now. Luckily, I have hot pink hair, or as my friend was saying, "Black to the Fuchsia" is my hair color; otherwise, I'm pretty sure you'd be asleep during this part of the presentation. Effectively, these two ISO standards are illustrated in these two columns. You've got ISO 29147: remember, that is the external reporting interface, and what you output to the world if a bug is reported from the outside. And then you've got the digestive system, which is governed by ISO 30111.
And what you can see here in the color coding is what bug bounty platforms take care of for you, in yellow: they can take care of the front-door element, and they can take care of the initial triage. But what they can't do is fix the bugs for you. You've got to decide what relative priority those bugs have internal to your organization, and it's not based on CVSS score, for example. That's a very poor metric, an attempt at gauging risk in a very generalized way, and it's not really going to help you prioritize what's most important to you in the context of your business. That being said, not everybody is really ready to implement both of these ISO standards. But if you intend to fix vulnerabilities at all, whether you find them yourself or somebody from the outside reports them to you, you need at least the digestive system. So you need to follow ISO 30111 no matter what, but not everybody is ready for the trouble that can come with ISO 29147. I keep joking that I should have knuckle tattoos of these ISO standards, because I swear they've been part of my life for the better part of the last decade and a half. So you're thinking to yourself: I'm just going to invite these friendly hackers, how bad could it be? Well, I don't know about you, but if you can see the animation here, even if they're friendly bunnies, gray hat, white hat, black hat, whatever you want to call them, a swarm of bunnies is still pretty scary. So in reality it's a lot closer to this: you can't tell friend from foe, in terms of your incident response, when investigating all of these hacking attempts that are suddenly authorized by you when you start a vulnerability disclosure program or bug bounty program.
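Her point about CVSS being a poor prioritization signal can be made concrete. The sketch below is purely illustrative and is not Luta Security's actual process: every field name and weight is a hypothetical stand-in for the business context only your own organization knows.

```python
# Illustrative only: ranking incoming bug reports by business context
# rather than by raw CVSS score alone. Fields and weights are hypothetical.

def business_priority(report):
    """Combine generic severity with context only your org knows."""
    score = report["cvss"]                # generic severity, 0.0-10.0
    if report["asset_handles_pii"]:
        score += 3                        # breach-notification exposure
    if report["internet_facing"]:
        score += 2                        # reachable by anyone
    if report["compensating_controls"]:
        score -= 2                        # WAF, segmentation, etc.
    return score

reports = [
    {"id": "A", "cvss": 9.1, "asset_handles_pii": False,
     "internet_facing": False, "compensating_controls": True},
    {"id": "B", "cvss": 6.5, "asset_handles_pii": True,
     "internet_facing": True, "compensating_controls": False},
]

ranked = sorted(reports, key=business_priority, reverse=True)
print([r["id"] for r in ranked])  # the lower-CVSS bug ranks first
```

In this toy example, bug B (CVSS 6.5, but internet-facing and touching PII) outranks bug A (CVSS 9.1, but internal and mitigated), which is exactly the inversion a CVSS-only queue would miss.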
And if you don't have a strong incident response or investigation capability, and as a startup, let's be real, you absolutely will not have these capabilities, you're going to have a really hard time telling whether this is a nation-state actor coming at you or a friendly, helpful hacker who just wants a bug bounty. Data privacy is a huge area that, honestly, the Department of Justice took upon itself to help define guidelines for. The DOJ released guidelines for how you scope what's in scope versus out of scope for data privacy when it comes to setting up policies for vuln disclosure programs or bug bounty programs. They essentially said: you, the organization, have to think this through. What are your obligations to protect users' PII, and your own employees' personally identifiable information? What regulatory requirements do you have for data breach notifications? And if you're thinking that as long as you put some sort of legal safe harbor in your vuln disclosure program or bug bounty scope you'll be fine, well, I've got to tell you: as a professional penetration tester, we were under a professional obligation to destroy any data that we encountered during a pen test, and we were under NDA with not just the customer but also with our employer. So we had procedures, and it was expected of us to destroy all that data. Not so much with whoever is out there doing the bug bounties and submitting reports that may contain PII. There's no general requirement on that person, the bug bounty hunter out there in the world, to actually destroy your data.
And then one of the most famous cases of data privacy overreach via a bug bounty was the Uber data breach in 2017, for which I testified before Congress, about how Uber used its bug bounty program to pay off extortionists who had actually downloaded 57 million records. Certainly that would have qualified as a data breach, even though they tried to pay some hush money and have the attackers sign an NDA in order to keep this off the FTC's radar and out of the public's purview. So there are many legal requirements, many data privacy requirements, that you have to think through before you begin to open that front door and ask helpful hackers to help you identify security holes.
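For reference, the "front door" she keeps coming back to can be as simple as a machine-readable contact file. The talk doesn't name it, but RFC 9116 standardizes a `security.txt` file served at `/.well-known/security.txt`; the values below are placeholders, not a real program's details.

```text
# Hypothetical /.well-known/security.txt per RFC 9116 -- all values are placeholders
Contact: mailto:security@example.com
Expires: 2026-01-01T00:00:00.000Z
Policy: https://example.com/security-policy
Preferred-Languages: en
```

Publishing a file like this only satisfies the reporting-interface half (the ISO 29147 side); it does nothing for the internal "digestive system" (ISO 30111) that the rest of the talk argues you must build first.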
So bug bounty platforms, as we said, can't stop everything. They can certainly help you figure out what's spam and what's not spam, and they can help by at least following the technical steps to reproduce an issue. But what is the problem? Well, there's a labor market problem, and not just in bug hunting, in finding people with hacking skills who can find the vulnerabilities for you. This labor layer of doing triage is actually the toughest job you'll never love. I like to tell this story about Microsoft: when I worked at Microsoft in the Microsoft Security Response Center, a Popular Science article came out that said that "Microsoft security grunt," which was sort of the name for all of us, was in the top 10 worst jobs in science. They put us between elephant vasectomist and whale feces researcher. So we were right there in the middle. We got t-shirts, but it didn't make our jobs any better. Ultimately, this is the kind of job that is part of those internal people, process, and technology investments, and it's the type of job you can expect someone to want to sit in for a maximum of about 18 months. So you've got a constant churn of training and handoff of these, what we call vulnerability case managers, internal to an organization, that you're going to have to manage. These are different from your outside triage people. Outside triage people are not going to know the relative risk to your organization, or the relative business priorities of all the other development work that you're doing. So this is the internal role that we're talking about, in terms of security engineering and case management. That being said, how do you do this more efficiently? If you're a startup, you are trying to put all of your money where you're going to get the most bang for your buck. Well, when it comes to security, you may be told that bug bounties are cheaper and more effective than penetration testing.
They might be, in certain very limited circumstances. But we've known for a very long time that finding bugs at the end of the software development lifecycle, after the fact, is up to 45 times more expensive than if you had invested in building security in from the ground up. And what we mean is: start at the beginning of the design phase, before you've written a single line of code. That is up to 45 times less expensive, and therefore a better ROI, than kicking off with a bug bounty program. So where does all the actual operational work take place? The initial triage, remember that color-coded yellow in the terrible ISO diagrams that were almost putting you to sleep? That's the little piece of the work that bug bounty platforms can take care of for you. It's a ticketing system with some initial triage. Where does it make the most sense for you to invest in the work? It's in your layers of doing that prioritization and differentiation, and ultimately pushing it into your security engineering processes. Ideally, you want to learn from every bug, whether you found it yourself or somebody from the outside told you. Never waste a good bug, and never make the same mistake twice. So you should be incorporating these learnings into your secure software development lifecycle, introducing new processes and new skills to your developers and your testers, such that you never make the same mistake more than once. Now, we came up with a Vulnerability Coordination Maturity Model, or VCMM. You can download that from lutasecurity.com/vcmm. What it is, is a framework of five capability areas that we found, in many years of setting these programs up at large, complex organizations, are a key indicator of how ready you are for a vuln disclosure program or a bug bounty. You can read more about that elsewhere, and we can answer some questions.
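The "up to 45 times more expensive" claim is easy to sanity-check as arithmetic. The dollar figure below is made up for illustration; only the multiplier comes from the talk.

```python
# Worked example of the cost-of-fix multiplier cited in the talk.
# The design-phase cost is a hypothetical figure.

design_phase_cost = 500          # hypothetical cost to fix a flaw at design time
post_release_multiplier = 45     # "up to 45 times" upper bound from the talk

post_release_cost = design_phase_cost * post_release_multiplier
print(post_release_cost)         # 22500
```

In other words, a flaw that would have cost $500 to correct on a whiteboard can cost on the order of $22,500 to remediate once it has shipped, which is the ROI argument for investing before the bounty, not instead of it.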
But here's what it comes down to, our recommendations: do a maturity assessment and gap analysis of your people, process, and technology; you can use the VCMM as a framework. Using that assessment, figure out, from your baseline, the most appropriate roadmap to get to where you want to be, security-goals-wise, and take a look at your security investments thus far. It may not make any sense whatsoever for a startup, let's say one with no internal security staff, to start a bug bounty. And really, unfortunately, I've run into more than one, more than two, honestly a huge number of startups that have been trying to shortcut or jumpstart their security program and their security engineering by starting at the wrong end of the investment plan: starting with bug bounties when they actually should have been investing in their internal security first. Again, try to prevent as many of those vulnerabilities from entering the code base in the first place, and then, if they do enter the code base, use the common tools and techniques that are available out in the world to scan for bugs yourself. So don't just measure your ability to fix bugs, or how critical the bugs are. Ideally, you want to get to a point where you can actually measure your maturity by not just fewer vulnerabilities, but less severe vulnerabilities that require more complexity to find and exploit. That's where you should be aiming. And if you're a startup, absolutely, building this from the ground up, and not doing it backwards as I've seen a lot of organizations do, is going to be the most efficient way forward for you. And trust me, as a bootstrapped startup founder, it's all about efficiency and where you spend your money. So assess your maturity, capabilities, and resources, and close the holes that you find, not just in software but in your process.
If you're missing steps in the security development lifecycle, start inserting those. If you're missing training for your developers, and tools, start inserting those. And then roll out a VDP, that's a vuln disclosure program, for those keeping up with the acronyms. We recommend that you run a vuln disclosure program for about two years before adding cash rewards. Why is that? Well, I minored in mathematics, and to establish a vector, you need two points. So you need two sets of annual data to really see in what areas you should be investing more security resources, and what direction you're going in generally. This will actually help you shape where you want to put that bounty. So in the end, even though I've started some of the biggest bug bounties in the world, started Microsoft's, started the Pentagon's, etc., I am a bit of a bug bounty apostate, meaning I will warn organizations, and especially startups that haven't made proper investments in security, away from bug bounties, because they're not ready. In fact, you may end up spending a lot of time recovering from a terrible bug bounty. Where we see people hit that can't-go-on-this-way-any-further pain point is at about 18 months in, if they have started a bug bounty or vuln disclosure program without a digestive system, without internal resources. That's really when the dogpile starts. So if you're trying to work out what your budget requirements are going to be, and what you need to tell your next set of investors, your VCs, about how you're going to use their money: talk to them about using some of their money to invest in security earlier, rather than going what can appear to be the cheap and easy route toward bug bounties out of the gate. There is such a thing as too much chocolate; there is such a thing as too much of a good thing.
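Her "two points make a vector" advice reduces to simple arithmetic: two years of VDP data per bug class show where reports are trending, which tells you where a future bounty would buy the most. The counts below are hypothetical.

```python
# Hypothetical two years of VDP report counts by bug class,
# illustrating the "two data points give you a direction" idea.

year1 = {"xss": 40, "sqli": 12, "auth": 5}
year2 = {"xss": 22, "sqli": 14, "auth": 19}

# Year-over-year change per bug class: the "vector" from point one to point two.
trend = {bug: year2[bug] - year1[bug] for bug in year1}
fastest_growing = max(trend, key=trend.get)

print(trend)            # {'xss': -18, 'sqli': 2, 'auth': 14}
print(fastest_growing)  # auth
```

Here XSS reports are falling (the SDL fixes are working), while authentication bugs are climbing, so that is where this hypothetical startup would aim its first bounty dollars.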
And while you do want to learn about as many bugs as humanly possible, especially when they're security bugs, you don't want to get overwhelmed too early, before you've looked for them yourself. So again, in summary: audit your own systems and software, build a sustainable vulnerability handling process first, and measure how it's doing before you start adding cash rewards. The idea here is that you are trying to bring balance to the Force. There are these concepts of creation, maintenance, and destruction. As startup founders, we are creating constantly; maintenance has a cost; and certainly we want to avoid some of the destructive behaviors that can happen with unpatched security holes, but more importantly with systemic security issues that haven't been addressed at the architecture level. Here are some references for you, and I'm sure we'll make the slides available via the TechCrunch platform. And with that, I would like to thank you for listening to this presentation, and ask Zack if we've got any questions from the audience.
Thanks so much, Katie, for that presentation. As a reminder, yes, the Q&A is still open. We won't be monitoring the chat, but we will be getting to as many questions from the Q&A as possible, so feel free to submit them using the slider button. We do have some questions in from the audience. The very first question is: how often do you need to go through some of these security testing processes? You made a reference to those ISO standards. Are those every year, or are they just a one-time thing?
Well, security should be a continuous process for you. But if you are writing code, if that is something that your startup does, certainly you need to be building in a security development lifecycle. Where you test and how often you test really does depend on your method of software development. With agile software development or continuous delivery, you're going to have different checkpoints involved in that software development process, and you will have natural insertion points for where you should be looking for security holes. As I mentioned, look for them in the architecture phase. If you don't have those security architects in house, that would be a great time to hire a professional to take a look at your security architecture. Then again, after you've implemented some code, having professional penetration testers and professional security code reviewers, again, if you don't have them in house, which as a startup you won't, that would be another time to bring in the professionals. And then finally, once your code is released, that would be a great time to have others take a look at it. Ideally, if you've made those investments along the way, by the time you're adding the newly released code to a vulnerability disclosure program and receiving those bug reports from hackers on the outside, you will have cleaned up a lot of low-hanging fruit ahead of time. And then you can have a continuous, sort of crowdsourced view of your security going forward after that.
We have another question from the audience. You mentioned in the last few moments of your presentation a two-year recommended gap between a vulnerability disclosure program and a bug bounty. How much should a company look to spend on preparing a bug bounty when they're ready?
It really depends on the attack surface and what you want to find out. If we'd had time for a longer presentation, I would have gone into some details of how I created Microsoft's first bug bounties, because, as you saw from the slide covering the elephant vasectomist and whale feces researcher problem, we were already getting somewhere around a quarter million non-spam email messages a year in the regular vuln disclosure program. So why on earth would we add cash to that melee that was already happening? Well, the way that we were able to do it was we decided: okay, we're going to learn about these bugs no matter what, because we have researchers who are willing to report them to us through the regular vuln disclosure program. But what we noticed was that for the beta versions of certain software, the researchers were holding on to their vulnerability reports and not telling us during the beta period. We saw it in the data: very low bug reporting during the beta period of Internet Explorer, for example, and then, as soon as Internet Explorer was what we call released to manufacturing, a giant spike of researchers coming forward. And these folks were supposed to be friendly to us. So what did we do? We did a traffic shaping exercise, where I said: if we put a bug bounty at the beginning of the beta period, we don't actually have to spend that much money. It's about bountying smarter, not harder. These folks wanted recognition, and before the bug bounty, the only recognition they could get was after the beta period was over, when we would release a Microsoft bulletin thanking them by name, and only if the vulnerability was still present in the final release version. There were no bulletins to be released if you found an issue in the beta version of the software.
So we had inadvertently created this situation where friendly researchers were kind of doing us wrong, and we wanted to correct that. So that is one answer to the question. It is a very big "it depends": what your attack surface looks like, what your targeting looks like, whether or not you've put some thought into bountying smarter, not harder, and turning those researcher eyes toward targets that you're particularly interested in, sweetening the deal with a little cash reward. Overall, we got 18 bulletin-class issues, each of which could fetch at least six figures on the offense market, potentially more, and we spent a total of $28,000 for those 18 bulletin-class issues at the time. So it really does depend.
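The economics of that IE beta bounty are worth working through with the two figures she gives: 18 bulletin-class issues for $28,000 total, each worth "at least six figures" on the offense market. The $100,000 offense-market floor below is taken as the lowest six-figure value, which is an assumption.

```python
# Economics of the IE beta bounty, from the two figures given in the talk.

bugs = 18
total_spent = 28_000
offense_floor_each = 100_000      # "at least six figures" apiece -- assumed floor

avg_per_bug = total_spent / bugs
print(round(avg_per_bug))                       # 1556 dollars per issue
print(round(offense_floor_each / avg_per_bug))  # ~64x cheaper than the offense floor
```

So the defender paid roughly $1,556 per bulletin-class issue, at least 64 times less than the conservative offense-market price, which is the "bountying smarter, not harder" argument in numbers.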
We've got one more question from the audience. Do you need to hire a CISO, or chief information security officer, to run a bug bounty program or vulnerability disclosure program?
You know, I would say that you don't need a CISO to run it. If you are an organization of the size and complexity that would benefit from a CISO at all, you probably should have had a vuln disclosure program already before that point. A bug bounty, as I was mentioning, is sort of the advanced case, ideally for when you have sorted out a lot of the low-hanging fruit yourself. The CISO's purview would be managing the efficiency of where you're investing your security spending, for good security outcomes for your users and for your own corporate environment. I would say, if you have a bug bounty before you have a CISO, you are definitely going out of order.
And there's also another question in regards to the presentation, in which you mentioned safe harbor in disclosing bugs, and that companies promise they won't sue. But what does this really mean, the audience member asks?
Well, you know, "safe harbor" is a really nice legal term that, no matter how you try to apply it to bug bounties and vuln disclosure programs, isn't actually what the term means. Safe harbor in legal terms, and I'm not a lawyer, but I've dealt with a lot of lawyers in setting up these programs, means: if you follow these provisions, then you are given safe harbor, and we're not going to prosecute you. That's effectively what they're saying in these bug bounty terms. However, there's no ability for any organization or any government to obviate the laws of where you live and where you are governed. So, for example, if it's illegal to send private information to a bug bounty program from a certain country, then you have effectively broken the law, even if you're conforming with the safe harbor requirements of the recipient. The recipient can't grant what I call hacker diplomatic immunity; they can't get you out of jail in every circumstance that you may run into. And actually, that Uber data breach situation turned out to be quite a legal conundrum, not just for Uber, who basically got the hackers to sign an NDA, and the hackers complied, allegedly: they destroyed the data, they said they weren't going to talk about it, they essentially followed the safe harbor that Uber was trying to provide to them in exchange for $100,000. Well, after that came to light, district attorneys in state after state were not only suing Uber, but those hackers were potentially in legal jeopardy. And in fact, when they tried to pull the same trick with an extortion attempt against a LinkedIn subsidiary, not only were they indicted, but recently the former CSO of Uber at the time, Joe Sullivan, was also indicted by the DOJ for performing that payoff.
And essentially, for not alerting federal authorities, the FTC and the FBI, to the crime that had occurred. What happened was, essentially, the DOJ argued that because he got them to sign an NDA, they went on to commit more crime. So this definitely is something where you don't want to mess around with safe harbor as a catch-all that will protect you, or that will protect the researchers. There are just too many circumstances here. The best you can do in your legal section is say: if you follow this, we won't pursue legal action against you. But we can't protect you from a third party if they choose to pursue legal action against you.
That's a really good answer. Another question from the audience. This seems to be about the internal culture at companies in regards to bug bounties and vulnerability disclosure: how do you convince team members to care and to play their part?
Convincing team members to care and play their part. You know, what was funny was, at Microsoft, 2010 was when they asked me to look into the matter of bug bounties, and 2013 was when Microsoft first launched its very first bug bounties. It was an organization where, internally, they believed they were doing what was required of them in terms of dealing with vulnerability reports. So the heads of the major flagship products were saying: look, we incorporate the SDL, we incorporate fixing vulnerabilities when they're reported to us. What more do you want from us, and why should we start paying these people, because they're already disrupting our work? And what I did was a data exercise. So this is one approach that may or may not work for you. I basically took all of the data for all of the vulnerabilities in the most interesting flagship products of Microsoft, including the current version and two versions back. So that would be Windows, current, minus one, and minus two versions, Internet Explorer, and Office, the entire Office suite. And I calculated, based on how many critical- or important-severity issues there were, so the most severe issues, how much money it would have cost Microsoft to pay those out in bug bounties. Now, at the time when I did this calculation back in 2010, the highest bug bounty being paid by anybody was being paid by Google, and that was $3,133.70. It spelled "elite" in leetspeak, with 70 cents at the end, right? So it was still a four-figure payout, under five grand. So what I did was calculate based on five grand per bug bounty that met this criteria, and I gave them a number, and I said: if we had paid for all of those in bounties, it would have cost $1.8 million total. It was never the money.
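That back-of-the-envelope exercise can be reconstructed from her figures: $5,000 per qualifying bug and a $1.8 million worst-case total. The roughly 360 qualifying bugs below is implied by those numbers, not stated in the talk.

```python
# Reconstruction of the internal data exercise from the two stated figures.

per_bug_bounty = 5_000            # assumed payout per qualifying bug
total_if_paid = 1_800_000         # worst-case bill presented to leadership

qualifying_bugs = total_if_paid // per_bug_bounty
print(qualifying_bugs)            # 360 critical/important bugs across the products
```

The rhetorical force of the exercise is that a company of Microsoft's size could easily absorb $1.8 million, which is why, as she says, "it was never the money" but the disruption to the development process.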
When I gave that traffic shaping exercise example to the head of IE, that was the perfect alignment: IE wants to deal with these in the most efficient way possible. It was never about the money. It was always about: how do we fit this into a complementary process that minimizes disruption to our existing development cycle?
We have another question; plenty of questions. If you still have questions, feel free to submit them through the slider button. When it comes to hardware startups, are there any other standards or considerations that they need to think about?
Well, certainly when it comes to vuln disclosure programs and bug bounties, when you have hardware components, or very expensive components, that you would want to be tested, you do have to figure out: how are you going to get those items to the people who are testing, and then, potentially, how do you get them back, if you want them back? There are a number of different ways to try to handle this, but certainly things like hardware, if you are especially concerned about it, might not make a great match for the VDP or bug bounty construct. For one, there aren't as many hardware hackers out there as, let's say, web application hackers. The largest pool of security researchers is those who can find vulnerabilities in web applications, and there's a huge number of tool suites available for that. For hardware, you usually need a specialist. And there are some companies that will say: we will crowdsource a bug bounty for you, with vetted individuals, with vetted skills, vetted certifications, etc., even ones with clearances if that's what you need. But again, that's essentially just a talent agency for freelance penetration testers, or at least they should be penetration testers, because it makes no sense for those types of folks with those skills to gamble their time and skills on a bug bounty. I'm going to say that one more time: people with highly refined and sought-after skill sets in security, especially in finding vulnerabilities in esoteric chipsets or hardware or whatnot, chances are they are more gainfully employed, with full-time jobs or penetration testing contracts. And I would actually recommend that if hardware is your concern, you start with specialized, vetted, professional pen testers, as opposed to looking to this crowdsourced model, because we're just not there yet in terms of a huge compendium of skilled folks in that area.
We still have a couple of minutes left, and I want to get through these last questions. Another one from the audience: when vetting vendors to work with, what are the top questions to ask to make sure that they have the correct security in place, so that the startup doesn't take on the risk?
Right, so we hear this a lot in terms of supply chain concerns: what technology are you bringing into your technology stack that you will have to support? Some people like to ask the question, do you have a vulnerability disclosure program or a bug bounty program, as one of the tests of maturity. But as hopefully you learned in this presentation, just having a front door and a way to report says nothing about the maturity of your ongoing process, or whether or not the bugs that you have are pedestrian-level bugs. So things that I would ask another vendor: tell me about your security development lifecycle; how do you address vulnerabilities that you find internally versus ones that are reported to you; what's your mean time to repair for vulnerabilities? And then work out support contracts with those third-party vendors that address the issue of, if we find security vulnerabilities or something is reported to us from the outside, what is the SLA that we can expect from you in servicing these vulnerabilities? I think those are great questions to ask other vendors.
So we have time for just one more question. Do you have any examples of companies doing bug bounties or vulnerability disclosure programs really right, or really wrong?
You know, it's a matter of looking at organizations at a particular moment in time; we emphasize a lot of assessing maturity and gap analysis. I would say that, from what I've heard publicly and through the grapevine, Microsoft had a Camelot phase when it was actually doing this stuff really, really well, and they seem to have slipped out of that phase. So I used to say Microsoft does this pretty well. But with recent reports on the ground, honestly, it seems like they are missing the boat, and perhaps they've changed something about their triage process, because they've been closing cases down with full proof of concept from reputable researchers. And that's been going on for the better part of a year at least. So while I would love to say there's a shining example out there who's doing it well, I think you need to look at organizations at certain points in time.
Okay, so we maybe have just one more question left with the time that we have. What are your thoughts on using crowdsourced solutions, such as Bugcrowd and other companies out there that have these platforms that help companies and startups field hacker outreach, essentially, versus running them in-house?
Well, it really depends on your risk model and your risk tolerance. So number one, this is another vendor that you're adding to your technology stack, but in a very particular way: it is a SaaS platform, and they are themselves still considered startups, even though they've been around for, I think, seven or eight years at this point. And I think HackerOne had a breach that they called not a breach, where an attacker could actually access multiple back-end inboxes. So we're dealing with the fact that not all security problems are safe to outsource to third parties. You've also got to understand, remember the worst job in security being that initial triage and triage support? That's typically staffed by contractors on the bug bounty platforms who are also themselves bug bounty hunters, and we've seen some double dipping, some cross dipping, going on in that space. So I would say that for your web applications that are available to the outside, you may get some derivative benefit from having that front-door spam filter of the bug bounty platforms. If you have more sensitive data, more sensitive materials, I would strongly consider managing it in-house, because ultimately what those bug bounty platforms give you is a structured ticketing system, which you can accomplish on your own or through other means. There's also something new from the federal government space called VINCE, which was created and deployed last year, right around the beginning of the pandemic, and VINCE was built basically for multi-party coordination, of which we've heard so much. So your mileage may vary, but look at it in terms of risk tolerance: who do you want having access to your vulnerabilities and your vulnerability reports before you've had a chance to fix them?
That might give you your answer.