Weaknesses in Security Testing
11:54PM Jul 27, 2020
Our next speaker has over 20 years of experience in software engineering and security best practices and is passionate about improving the state of cybersecurity at the earliest stages of software development. We present Bryce Williams with Weaknesses in Security Testing.
Welcome, everyone, to Weaknesses in Security Testing.
This talk is about application-level security testing and weaknesses in test automation. My name is Bryce Williams, and I'm the cybersecurity practice lead for a technology consulting firm called CIS Logic. I have 20 years of experience in software development and cybersecurity, and I lead a team of security consultants who perform penetration testing of commercial products, applications, and IoT devices, as well as train thousands of developers around the world.
I'm based here in New York City.
And I've really enjoyed attending HOPE over the years, and I'm excited to be speaking at this year's online event. I'm going to be using two main information sources for this talk. One is the data from the pen testing that CIS Logic has performed over the years. We perform a variety of product and hardware device assessments that involve detailed review of really everything the client will give us: source code, documentation, configuration, virtual and physical environments. Over the last decade, we've evaluated many different types of products across several industry verticals, many millions of lines of code, with thousands of security weaknesses reported. The other main source this talk derives from is the Open Web Application Security Project, or OWASP. If you're not familiar with OWASP, it's a global organization with a lot of great resources. Some of those that I've derived guidance and general information from are listed here: the Application Security Verification Standard, which I'll talk more about, the Mobile Application Security Verification Standard, the Web Security Testing Guide, the Mobile Security Testing Guide, and the Firmware Security Testing Methodology. If you're not familiar with these, I highly recommend you check them out. They give you more information about what goes into building software securely, how to test it for security weaknesses, and even how to leverage that information from more of an attack and exploit point of view.
The ASVS, or Application Security Verification Standard, is currently at version 4.0. It consists of 288 granular security requirements and has two main purposes: to help organizations develop and maintain secure applications, and to allow security service vendors, security tool vendors, and consumers to align their requirements and offerings. It contains a very interesting statement: automated tools and online scans are unable to complete more than half of the ASVS without human assistance. I won't go into detail about the chart on the lower half of the slide here, but there are three different levels. Level 1 is designed to be penetration testable; Levels 2 and 3 require access to documentation, source code, configuration, and the people involved in the development process. The types of requirements in those levels are generally not something that automated tools can detect. So there are quite a number of requirements in this excellent resource that tools aren't going to be able to assist with as easily. My team and I did a review of the pen tests that we performed over the last decade or so, and realized after going through all the data that, on average, almost half of the vulnerabilities we discovered were found through manual analysis, meaning the others were either found by automated tools, or at least the tools provided enough of a direction for us to dig in and find that particular weakness. So that's a large number of findings, and, you know, your mileage may vary, but that's what we saw. We also noticed that our manual findings had a higher average severity, and that more manual findings came from IoT device assessments and from more secure systems. That makes sense: systems that are more mature have had multiple pen tests performed against them, so at that point we're likely only finding weaknesses through manual means. So what does this number, 46%, mean?
Maybe my team didn't use the best tools, or could have configured them better. But like many teams, we're constantly updating our toolset and looking for better ways to automate what we do. My guess is that other teams have a similar percentage, whether that's 30% or 50%. I think a good portion of the findings resulting from a pen test or a particular security assessment are those that can only be found via manual means. This next slide looks at pen test finding categories; this chart is based on vulnerability types per pen test. You can see on the left-hand side we have more third-party-based findings, as opposed to access control, for example, on the far right. That's to be expected: a lot of modern software uses a lot of open source, and that open source has published security vulnerabilities, so a good number of the findings fall into the third-party category. Security tools tend to have a lot more findings in categories they do well in, and fewer findings in categories they struggle with. The same is true for pen test engagements, where you have a lot of team members, different types of tools, and manual analysis, but certainly to a lesser degree. So this is interesting data in the sense that it shows, based on our data, what our breakdown looks like by category. This helps us understand where we need better tools, or better processes and techniques, for finding weaknesses in areas where we don't see as much. Certainly, at the end of the day, the applications that we review simply may not have as many weaknesses in certain areas as they do in others,
but you can at least see
what we were able to produce.
In this next section, I want to talk about different types of security testing tools.
First, we have static application security testing, or SAST. This is the analysis of computer software without actually executing programs. It includes source code, binaries, mobile APK and IPA files, container images, firmware images, etc. So it's not just source code analysis: although we often think of SAST as being source code analysis, it can be more than that. It ranges from simple grep-like tools to powerful solutions using dataflow analysis and machine learning.
Some common SAST tools: Cppcheck, FindSecBugs, ESLint, SonarQube. These all have open source versions that are a great way to get involved in the SAST area. Commercial tools in this space can be rather pricey, which is why many use open source tools.
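To make the grep-like end of that spectrum concrete, here's a minimal sketch of a pattern-matching scanner. The rule set and messages are invented for illustration; real SAST tools layer parsing and dataflow analysis on top of this idea.

```python
# Toy "grep-style" static analysis pass: scan source text for known-risky
# calls. Rules and messages here are illustrative, not from any real tool.
import re

# Hypothetical rule set: regex pattern -> finding description
RULES = {
    r"\bstrcpy\s*\(": "C: unbounded string copy (use strncpy/strlcpy)",
    r"\bgets\s*\(": "C: gets() is inherently unsafe",
    r"\beval\s*\(": "Python/JS: eval() on untrusted input",
    r"\bMD5\b": "Weak hash algorithm",
}

def scan_source(source):
    """Return a list of (line_number, description) findings."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, description in RULES.items():
            if re.search(pattern, line):
                findings.append((lineno, description))
    return findings
```

Even a trivial pass like this catches real low-hanging fruit; the gap between this and a commercial SAST tool is context: knowing whether the flagged input is actually attacker-controlled.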
Next, we have software composition analysis or SCA.
SCA tools examine software to determine the origins of all components within it. They're effective at identifying open source libraries with published vulnerabilities; they don't necessarily look at the open source library code itself to find weaknesses that haven't already been discovered. Their whole purpose is to create an asset inventory of all of your open source libraries, identify any known published vulnerabilities for those library versions, and help you take the appropriate steps around upgrading or applying a patch.
Some common tools we see in this space include the ones listed here.
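At its core, what an SCA tool does can be sketched very simply: compare an inventory of library versions against a database of published advisories. The advisory data and package name below are made up for illustration; real tools pull from feeds like the NVD.

```python
# Toy software-composition-analysis check. ADVISORIES is a hypothetical
# database: package -> (highest vulnerable version, advisory id).
ADVISORIES = {
    "examplelib": ((1, 2, 3), "CVE-XXXX-0001"),
}

def parse_version(v):
    """'1.2.0' -> (1, 2, 0) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def check_inventory(inventory):
    """inventory: dict of package name -> version string.
    Returns a list of (package, version, advisory) for vulnerable entries."""
    hits = []
    for name, version in inventory.items():
        if name in ADVISORIES:
            max_vulnerable, advisory = ADVISORIES[name]
            if parse_version(version) <= max_vulnerable:
                hits.append((name, version, advisory))
    return hits
```

The hard part in practice is building the accurate inventory in the first place, especially with transitive dependencies and vendored copies.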
Let's look at some areas where tools do well.
First, we have memory safety, insecure string and integer handling, and ASLR and DEP. These are all things that a good static analysis tool can help with, especially in the C and C++ space: looking for memory-related concerns in the code, insecure string handling, and use of binary protections, the compiler flags that opt into operating system protections like ASLR and DEP. All things that are very easy for a good static analysis tool to do, and to do well.
Identifying open source software and their published vulnerabilities: this is where your SCA, your software composition analysis tool, is really good at creating an inventory of the libraries being used, letting you know if there are any published vulnerabilities in those versions, and even highlighting license concerns based on policies your organization might have. Common web issues like XSS, CSRF, clickjacking, and missing security headers all fall into the realm where a DAST tool is really useful and can detect these sorts of items very easily, right away, even with just some passive scanning in many cases.
Then we have injection weaknesses,
SQL, XML, LDAP, OS: there are many different types of injection weaknesses. You can find these through a variety of means. A static analysis tool can often find them based on the particular language. DAST is also quite useful at exercising and finding injection weaknesses if it's, you know, a web application. But the tool that does this best is the IAST tool, in that it can not only do what the DAST and SAST tools do, but it can also find cases where maybe the development team has done something a little out of the ordinary: they've used a unique framework or some custom layers in their process, so that it would be more difficult for a SAST or DAST tool to see the issue. Especially if you do some manual poking and prodding from the outside, that's where the IAST tool would likely be able to detect that an actual injection issue did occur or is possible, and notify you of it. There are certainly many more areas where tools do well; I just wanted to highlight a few here. Here are some common misunderstandings about security tools. I've seen a number of organizations that are too reliant on tooling for their security health, and many of these things come up at times. For example, that one good security tool can do it all. I wish this were true, but it almost never is, in any environment. Many teams, more mature teams especially, have multiple tools, a variety of tools in their arsenal.
That higher cost means better results.
Certainly, higher-cost tools can be worth the price tag. But that doesn't mean that just because you pay a lot of money for a tool, it's going to meet your needs, or meet all of your needs. Oftentimes, open source tools can be just as good in the right hands. Next, that tools can be used by those with little security experience. Generally, tools do require a level of experience, either with using the tool itself, or with cybersecurity-related concerns, or with software development best practices.
Next, that you always want zero false positives.
This can be a little contentious at times, but I like to see at least a few false positives, you know, a minimal set, but at least a few, depending on the engagement, because those often will clue you into something that could be a little suspicious, things that require investigation. So if you have no false positives, you're likely excluding even suspicious areas of the code or the system, where if you dug in a little deeper you might find something.
Next, that tools can replace people.
This certainly is not true; we will always have people involved in security testing. Tools just help to make their job a little easier.
And finally, that the results are generally understood by software engineers. Some tools do a better job of this than others, but often the results that come from a security tool need some level of translation: some way of, you know, turning them into a language that developers can take and make actionable changes to the system in order to correct the findings.
So in this next section, let's
look at some examples where tools fall short.
First, we have poor architecture choices. In this image on the left, you can see that the lower balcony has no way to get to it. So whatever the purpose was behind this, it was at least not a very good design. Automated tools generally have a hard time getting enough context to point out design flaws. We also very often see a general lack of reusable security protections, meaning if you find session management weaknesses in a particular device, you might find them in another device from the same manufacturer.
One example that we've seen many times
is that applications use custom implementations of identity and authentication services instead of leveraging a more secure third-party solution. We almost always recommend that software engineers take advantage of something well vetted that provides a good authentication solution, rather than building their own.
Insecure direct object reference, or IDOR. IDOR weaknesses are a type of access control vulnerability that arises when an application uses user-supplied input to access objects without verification. For example, in a multi-tenant system, the user is not verified to ensure they have authorization to access the requested tenant ID. In the URL at the bottom here, you can see the ID parameter takes an integer value. What happens if the attacker changes that number to a different value, one the current user shouldn't have access to? This is something where security tools can fuzz that ID and tell you if anything bad happens, anything out of the ordinary, you know, if an error occurs. But they can't really tell you if the data that you're getting should be returned or not. This is often overlooked by software engineers, because they assume an attacker would not be able to determine the ID of another object or another tenant, something the user shouldn't have access to.
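The IDOR pattern just described can be sketched in a few lines. The data and names here are hypothetical stand-ins for a web handler: the vulnerable version trusts the supplied tenant ID, the fixed version checks it against what the authenticated user is allowed to see.

```python
# Hypothetical multi-tenant data store and authorization map.
TENANT_DATA = {101: "tenant A records", 102: "tenant B records"}
USER_TENANTS = {"alice": {101}, "bob": {102}}  # who may read which tenant

def get_tenant_vulnerable(user, tenant_id):
    # BUG (IDOR): no check that `user` is authorized for `tenant_id`;
    # the supplied ID is used directly to look up the object.
    return TENANT_DATA.get(tenant_id)

def get_tenant_fixed(user, tenant_id):
    # The lookup only proceeds after verifying authorization.
    if tenant_id not in USER_TENANTS.get(user, set()):
        raise PermissionError("not authorized for this tenant")
    return TENANT_DATA[tenant_id]
```

This is exactly the kind of check a fuzzer can't make for you: both versions return a well-formed 200-style response, and only knowledge of the authorization model tells you which one is wrong.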
administrative access control,
How does a security tool know what is admin-level functionality? Should a user be able to view details of other users in the system? That's one example of what could be administrative access in one system but not in another. So it can be difficult for tools to really help here; they often provide limited help in determining if authorization checks were correctly implemented. We've seen that custom authorization implementations generally have weaknesses if you poke at them deeply enough. For example, in a poor implementation we'll often see a scenario where a user can upgrade their role or permission simply by changing a parameter, or by supplying a field that the UI isn't passing to an API. But if you supply it, you could change your role to administrator, and there's no authorization check to prevent that from happening. A tool might be able to help with this, but they generally don't, so it's kind of up to you, as you're evaluating a system, to see if something like this is possible. Next, I want to look at an access control example. We did a particular pen test where the system had a fairly complex authorization model, and we noticed in our testing that there were some discrepancies between the user accounts, the roles they were assigned to, and ultimately the permissions that they had. In this system, the permissions were called features. We noticed that regardless of which user we were testing with, we had access to the same sorts of permissions and things in the system. So we went through and did what we call a detailed authorization, or access control, review, where we looked at all of the API methods that you could call as a user, or that the application would call on your behalf, and then all of the roles and the underlying features that were defined in the code, and mapped them all out in a spreadsheet. And we highlighted two interesting things.
One, these lines in yellow, which I don't expect you to be able to read; it's rather small. The lines in yellow were features that included unexpected API access. What that means is that the feature had a name like miscellaneous or connected equipment, which isn't very instructive about what it should allow the user to do, so that when you're assigning these features to roles, it's not entirely clear what you're allowing that role to do. More concerning were the ones we highlighted in pink. These were features that provided some sort of administrative access, where again it wasn't clear from the name of the feature that it was providing any sort of administrative access. It's not clearly named admin, or user administration; in this case, again, we had feature names like miscellaneous or connected equipment.
So it was somewhat confusing, not only from a naming perspective, but we also saw that some of the API methods didn't have any features assigned to them at all, meaning that all users had access to them, and they were administrative functionality. This is a good exercise to do. Many systems, especially those that are more mature, have developed this bit of complexity in their authorization models over time, and with complexity comes confusion, and ultimately the possibility of creating escalation scenarios, especially where end users of the system don't understand what they're assigning to users, or where you have some permissions that effectively slip through the cracks.
Misuse of cryptography. Tools can highlight lack of encryption, use of insecure or weak ciphers, and misconfiguration issues. What they're not so good at is determining whether you're using the wrong tool for the job, like reversible encryption to store user account passwords instead of a password-based key derivation function, which is preferred. Tools are also poor at telling you when encryption keys are not being stored securely, or are being reused between environments or devices. We see this quite often in systems.
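As a sketch of the "right tool for the job" point: here's password storage with a password-based key derivation function, using PBKDF2 from the Python standard library. The iteration count and salt size are illustrative parameters, not a recommendation for any specific system.

```python
# Storing passwords with a KDF (PBKDF2-HMAC-SHA256) instead of reversible
# encryption: there is no key that decrypts these back to plaintext.
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor

def hash_password(password, salt=None):
    """Return (salt, digest) for storage. A fresh random salt per user
    prevents identical passwords from producing identical records."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the derivation and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

A scanner can flag a weak cipher name in the code, but choosing a KDF over reversible encryption is a design decision it usually can't evaluate.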
Let's look at an example of poor cryptography. Here we have two pen tests of the same system.
Tools technically should be able to help with this, although most that I've seen cannot. You should also make sure that the time taken to return the user response message is uniform. If there's a big difference in time between supplied user accounts that exist versus those that don't, an attacker can use that information to determine which accounts exist in the system. This is another area where tools should be able to help, but generally we don't see it in tools. You should use a side channel to communicate the method for the user to reset their password. This will be very difficult for a tool to detect and verify, simply because the tool doesn't have that context. It can't tell that an email or SMS message is being sent out. It can't necessarily look at those tokens to see that they're being supplied, that they're stored securely on the back end, that they're single use and expire after an appropriate period. All of these things involve forgot-password recovery, and most of what goes into being able to recover your password is something that security tooling is going to have difficulty telling you was implemented securely or not.
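The uniform-response idea can be sketched like this: do the expensive password derivation even when the account doesn't exist, and return the same generic message either way, so neither timing nor wording reveals which usernames are valid. All names here are hypothetical.

```python
# Login sketch that avoids account enumeration: unknown users still pay
# the full KDF cost, and all failures return an identical message.
import hashlib
import hmac
import os

ITERATIONS = 100_000
_DUMMY_SALT = os.urandom(16)   # used when the account does not exist
USERS = {}                     # username -> (salt, pbkdf2 digest)

def add_user(username, password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    USERS[username] = (salt, digest)

def login(username, password):
    # Always derive, even for unknown users, so timing is comparable.
    salt, stored = USERS.get(username, (_DUMMY_SALT, b"\x00" * 32))
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    if username in USERS and hmac.compare_digest(candidate, stored):
        return "ok"
    # Same message whether the user was unknown or the password was wrong.
    return "invalid username or password"
```

This is exactly the kind of property that's cheap to enforce in code and hard for a black-box tool to confirm.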
There are many things that go into robust application-level auditing, most of which tools cannot help with. For example, that the application audits all relevant user actions: it would be difficult for a tool to tell you if you have enough logging, or that you're logging the most important areas of your application's use. That the time clock is properly synchronized: make sure you've got accurate timestamps on each of your audit entries. That the application uses a global unhandled exception handler: this is something a tool might be able to help with, depending on the programming language and the context.
But generally, I don't see this in tools. That each audit event includes a user ID or username, or something of that nature. Also, that each audit event includes the client IP. Anything around determining whether audit events are robust enough, that they include the data you might need later for incident response work, is something tools generally aren't going to help with.
That the audit logs are adequately protected from tampering: this is another area where tools generally cannot help you.
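Pulling those auditing points together, a single structured audit event might look like the sketch below: who, what, when in UTC, from where, and the outcome. The field names are illustrative, not a standard.

```python
# Sketch of a structured audit event carrying the fields discussed above.
import json
from datetime import datetime, timezone

def audit_event(user_id, action, client_ip, outcome):
    """Build one audit record as a JSON line."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # synchronized UTC clock
        "user_id": user_id,      # who performed the action
        "action": action,        # what was done
        "client_ip": client_ip,  # where it came from
        "outcome": outcome,      # success / failure
    }
    # In a real system this line would go to an append-only,
    # tamper-evident store, not be returned to the caller.
    return json.dumps(event)
```

Whether these records are complete enough for incident response, and whether the store they land in resists tampering, are exactly the judgments tools can't make for you.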
Mass parameter assignment is a very interesting sort of attack. This is an area where I think tools could help more, but they generally don't, so it's often something you can look for in systems yourself. The way it works is that some software frameworks allow developers to automatically bind HTTP request parameters into program code variables or objects, to make things easier for the developers. In the example below, we've got an HTML form with three different values, user ID, password, and email, that are submitted via that form. And then we see this user class on the back end; this is what receives the data input by the user. You'll notice there's an additional property here, isAdmin. Certain web frameworks will automatically bind the isAdmin property, or really any property, if it's supplied by the front end via the HTML form. The framework handles wiring that up automatically, which is convenient, but it can also open the door to security issues. In this case, if an attacker knew, or simply guessed, that that particular property might exist, they could supply isAdmin=true and potentially change the user's role from a general user to an administrator, and thus elevate their privileges. Business logic attacks: tools can help replicate bots and automation attacks, but they need you to guide them in what activity is considered unacceptable if done too quickly or out of sequence. This is another interesting area we see in applications, not quite as frequently, but definitely an area where tools are not going to be able to help much. The particular example below is called a time-of-check, time-of-use attack. Here we have an IoT device that can be updated via a USB or SD card, which contains a digitally signed firmware upgrade. So a user can supply that to upgrade the device.
In this particular workflow, the attacker supplies a valid firmware file via the USB or SD card and inserts it into the device. The device then validates that the signature is correct, and then prompts the user. So at this point it pauses and prompts the user to say, hey, do you want to continue? We've validated the firmware image; it's good to go. Normally, the user would choose yes, the device would go ahead and install that firmware image, and everything would be fine. But the attacker at this point can remove the SD card and replace it with their own, containing a firmware image that they have not signed. In this case, because the validation check was already done, the process thinks everything's fine, so when the user clicks OK, it continues, and the device is now running bad firmware. This is simply a workflow in which, if things are done out of sequence, or the developers aren't thinking through scenarios in which an attacker could change things or do things unexpectedly, you can end up in an insecure state like what we see here. Secure configuration: a lot of application security is controlled through configuration settings outside of the code. Many tools overlook these settings. Some can certainly help, but you often see issues, especially with custom configuration settings, which in turn control how security is handled throughout the system. Security tools might be able to help with common security configuration and can highlight those, but anywhere you've got anything done on a more custom basis, it's going to be difficult for a security tool to help you, because all it can do, in a static analysis scenario for example, is look at the code and say, well, in some cases, it looks like it would be fine. We often see the same back-end keys or passwords used in both test and production environments.
And this, of course, allows attackers to take information they learned from a test environment and apply it to a production environment, if they have that sort of access. In the configuration snippet in the lower right, we see some common configuration settings, like redirect-to-HTTPS set to false when it really should be true, or valid-SSL-certificate set to false when it really should be true. Often these are set this way for development environments, where they turn off security in order to ensure functionality is working, and then they might forget to turn these settings back on in a production environment, or there might be certain conditions where the settings don't apply. That's where things can get into trouble. As a result, it's very important to pay attention to security settings, or configuration settings in general, and make sure that security is not being turned off, bypassed, or downgraded as a result of those settings.
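One lightweight defense against exactly this "forgot to turn it back on" failure is a startup check that refuses to run production with security switched off. A minimal sketch; the environment and setting names are hypothetical, echoing the snippet discussed in the talk.

```python
# Startup guard: fail fast if a production deployment has
# security-relevant settings disabled. Names are illustrative.
REQUIRED_TRUE_IN_PRODUCTION = ["redirectToHttps", "requireValidSslCertificate"]

def check_production_config(config):
    """Raise ValueError if production config disables security.
    Returns the (empty) list of problems otherwise."""
    if config.get("environment") != "production":
        return []  # dev/test environments are allowed to relax settings
    problems = [
        key for key in REQUIRED_TRUE_IN_PRODUCTION
        if config.get(key) is not True
    ]
    if problems:
        raise ValueError(f"insecure production settings: {problems}")
    return problems
```

Running a check like this at boot, or as a deploy-pipeline gate, turns a silent misconfiguration into a loud failure.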
Often we will see that software engineers add security protections to a particular application or product that are a little abnormal, a little unusual, and as a result often have weaknesses. Those weaknesses sometimes are not something that a security tool can find, but certainly something you can find with a little bit of effort. Often, developers get notified of a finding by a security tool, but then they make a change that addresses it just enough that the security tool likes what it sees, and they can continue on with their day. But they haven't really addressed the underlying problem; they've just put a bandaid on it that a skilled attacker can find another way to exploit. This particular example was a very interesting one. It was an IoT device with a web interface and a Node.js back end. This may be difficult to read here, but on the left-hand side there's a verify password function, and they have put in some protections to prevent OS command injection. If you look closely, you can see that if the password length is greater than 16 characters, it gives a message: password length greater than 16, rejecting. They also look for certain special characters, and if they detect them, it prints out: OS injection character, rejecting. Ultimately, the password and salt value passed to this function are passed along to the openssl passwd command line in order to effectively hash the password, so it can then be compared to what's stored in the underlying user account store. What we recognized after looking at this code was that this is certainly good enough to prevent most DAST tools from finding this sort of issue, but not good enough if you're really clever and can work within the specific limitations they've set, because they haven't closed off everything. For example, you can still use the backtick character.
So as long as you can send commands that are 16 characters or less, you could still potentially run your own OS commands. Effectively, that's what we did. It took us a bit to figure this out, but we sent in a series of password submissions. In other words, this was a login page, and you could give it a value as the password that it would then ultimately pass to openssl via the command line. So effectively, we wanted to terminate that command and run our own. We passed in a series of short commands and built up little script files. We needed an ampersand character, but that was in their exclude list, so ultimately we had to build our own little script that would convert exclamation point characters to ampersands. Then we built another bash script that would open a reverse shell. And so ultimately we got this to execute, and via a remote Netcat listener we had remote shell access to this device as root, all just by taking advantage of a fairly simple weakness in a login page where they already had protections in place to prevent OS command injection. This is just a good example of an area we see in systems where there are some protections in place, sometimes just enough to get past the results seen by security tools, but not good enough to prevent an attack from someone who's skilled enough to understand how to manipulate the system better. In this next section, I want to look at some suggested ways to improve. First off, we have better security testing. If you perform manual security testing, or maybe you participate in bug bounty programs, I recommend that you focus on areas where tools don't do as well. This will give you more opportunity, because while tools pick up some of that low-hanging fruit that they're good at, you'll be able to get the more difficult items, the things that, you know, tools just aren't good at finding.
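As a footnote to that command-injection example: the durable fix is not a character filter but keeping untrusted input out of the shell entirely. A sketch of the contrast, using Python's `subprocess` and a plain `echo` command as a harmless stand-in for `openssl passwd` (this assumes a POSIX system with `echo` available):

```python
# Contrast: interpolating input into a shell line vs. passing it as argv.
import subprocess

def run_vulnerable(user_input):
    # BAD: the input is spliced into a shell command line, so backticks,
    # $(...), semicolons, etc. are interpreted as shell syntax.
    return subprocess.run(
        "echo " + user_input, shell=True,
        capture_output=True, text=True,
    ).stdout

def run_safe(user_input):
    # GOOD: no shell is involved; the input is a single argv element,
    # so shell metacharacters are just ordinary bytes in an argument.
    return subprocess.run(
        ["echo", user_input],
        capture_output=True, text=True,
    ).stdout
```

Had the device's login page invoked `openssl passwd` this way, the backtick trick would have produced a literal password string instead of a root shell; no length limit or character blocklist required.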
If you build security testing tools, add features or new tools to address gaps covered in this talk. All tools have room for improvement.
Next, we have cybersecurity training.
Training for software engineers is my number one recommendation for improving cybersecurity health. This slide has three different methods, just to highlight different styles of training. They all have their pros and cons. For example, instructor-led or classroom-style training has great personal interaction and is the most widely used type of training, but it can be more difficult to scale. This is the type of training that CIS Logic provides to software engineers, both on site and remote, and, you know, with the current pandemic, everything's being done remotely, so we've made sure we can provide this remotely in a way where attendees are still able to retain knowledge easily.
Next we have recorded, or e-learning. It scales well, and you can take it at your own pace, because you're just leveraging some pre-recorded content. You of course have lower interaction at that point. Then we have hands-on labs. These are practical and well liked, with generally good retention, because you're getting in there, you're doing something, and as a result you retain that knowledge much better. These often require longer on-site classes; not always, but certainly if you're working with IoT devices and attendees need to have those devices, then you can't provide those as easily.
On the right hand side here, I wanted to highlight
some various services, some commercial offerings out there that provide great hands-on-style training, either writing code and understanding how to do it more securely, or looking at vulnerable code and identifying weaknesses in that software. There are just some great resources to check out if you want to see what's available for providing hands-on training, gamification, CTF style, for your software engineers. One I want to highlight is OWASP Juice Shop. This is a free, vulnerable web application that's designed to teach you how to discover weaknesses in systems, and it does a great job of providing a modern web application that not only has weaknesses in it, but also gives you awards when you find those weaknesses. It can be used in more of a CTF-style environment, where you have teams competing against each other.
Unit and integration testing. I wish that security-focused unit and integration tests were more common. Unfortunately, they're not, but this is a great resource for software developers to take advantage of. Whenever a type of vulnerability is discovered in code, developers should consider writing automated tests that continually look for other instances of that vulnerability. This is really powerful with custom frameworks that a static analysis tool would not be aware of or have the necessary context for. You can build automated security tests that leverage security tools in a more targeted fashion, which often provides that much-needed context. This is something I recommend to development teams that have taken care of the basics and need to move up to that next level: they've run security tools, they've addressed all of the findings on an automated basis in their CI/CD
process. The next step that I
recommend, you know, get those security unit tests those security integration tests in the system so that you look for things that are specific to your software to your frameworks that are being used.
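As a minimal sketch of the idea (the helper function and payload list here are invented for illustration, not from any real framework): after an injection bug is found once, a regression test can pin down the fix so the same class of flaw is caught automatically on every CI run.

```python
import unittest

def build_lookup_query(username: str) -> str:
    """Hypothetical custom-framework helper. After the injection fix,
    it returns parameterized SQL and never splices user input into
    the query text."""
    # The placeholder keeps attacker-controlled input out of the SQL string;
    # the driver binds `username` separately.
    return "SELECT id FROM users WHERE name = ?"

class SqlInjectionRegressionTest(unittest.TestCase):
    """Security-focused unit test written after a vulnerability was
    discovered, so other instances are flagged continually."""

    PAYLOADS = ["' OR '1'='1", "'; DROP TABLE users; --", '" OR ""="']

    def test_user_input_never_appears_in_query_text(self):
        for payload in self.PAYLOADS:
            query = build_lookup_query(payload)
            # Raw payloads must never show up in the generated SQL.
            self.assertNotIn(payload, query)

if __name__ == "__main__":
    unittest.main()
```

A static analyzer would not know that `build_lookup_query` is the one place queries are built in this hypothetical codebase; the team does, which is exactly the context a targeted test like this encodes.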
Next, we have application penetration testing. And I want to
highlight that this does differ from traditional pen testing, which looks at an organization's network environments or Wi-Fi networks, where one of the main goals is often getting domain admin credentials. With an application pen test or a product pen test, you are ideally looking at a holistic view of the system. So there's generally some research and discovery that occurs: understanding how the system works, looking at architectural documentation, and reviewing or producing threat models of the system. Then there's some automation involved; that's where you run your tools and scan for various weaknesses. You also want to look at the source code and make sure that's thoroughly reviewed. And of course there's manual analysis, which can take a variety of forms depending on the system, but is essentially poking and prodding at the system to try to understand where there might be weaknesses. And then at the end, you provide remediation options. It doesn't work as well if you don't provide information that is actionable by the software developers, if they can't address the weaknesses that you're identifying. So take some time, understand what they need to change and what they need to do, and provide some pointers and guidance, being as explicit as you can: make this configuration setting, or change this code to that, or use this other third-party library to provide this functionality. The more targeted the guidance you can give, the more likely it is that developers will be able to address the issue, or at least address it more quickly.
And it's also important, of course, to sit down with them and talk about the results and answer any questions they might have, so that they have a full understanding of the weaknesses that were discovered and how they can address them. That way, the next time you look at the system, assuming there is a next time, and ideally there is, you can see that those past vulnerabilities have been addressed, and addressed well.
So I spoke
about that from the standpoint of someone involved in penetration testing. But if you're on the other side of the fence and you're working with pen test firms, take a look at what their process is: do they have these various steps, and how detailed are they in their pen testing? Maybe you don't need this level of depth; maybe you just want a cursory review of the system. But know that, generally, the more information you can provide, and the more time they can spend going through the various components of the system, the better the results you're going to get.
Bug bounty programs. For those
not familiar with bug bounty programs, they're an arrangement offered by many organizations by which individuals can receive recognition and compensation for reporting security bugs. They work best for organizations that have achieved a more mature level of security practice, so I wouldn't recommend them for an organization that is just getting started down its security path. If you have trouble patching your servers with security updates, then a bug bounty program is not for you. Not yet.
Bounty testers can almost always find weaknesses that tools would overlook, so make sure that you've run the tools first to identify issues. When you feel comfortable that you've found everything the tools can find, then you can hand it over to bug bounty testers, who are likely going to take it to the next level and, hopefully, find deeper issues that require more time and more expertise in particular areas. In conclusion, I want to leave you with three points. One: most applications and IoT devices have security flaws, and this will likely be the case for some time to come. Two: security tools have plenty of room for growth. And three: if you're in the business of exploiting weaknesses in systems, take a look at the areas where tools tend to struggle. The bitly link here on the left-hand side will get you a copy of this presentation. I want to thank everyone for attending my talk today, and I look
forward to your questions.
We'd like to present Bryce Williams, live and direct. This is Weaknesses in Security Testing. And now that you've seen the presentation, we'd like to invite our audience, those of you in the Matrix chat room, to ask your questions to Bryce. Just type them out in the channel, and we will relay them here in the session Q&A channel on our Matrix server.
Thanks, everyone who was able to listen in. I really appreciate the opportunity to share some of my experience and my team's experience, and if you have any questions, I'd be happy to answer them.
This was obviously a fun experience for me. It's always interesting doing an online, recorded version as opposed to a live event.
It's interesting that we're presenting to more people than we could fit in the ballroom at the Hotel Pennsylvania. That's cool. And
if any of you in the audience have questions, please present them.
Yeah, my team and I talk about this on a daily basis: this idea of testing software for security weaknesses using tools, liking tools, hating tools, trying every technique possible, asking "why isn't there a tool that does this? Can we write it?" Sometimes we can, sometimes we can't. So this will obviously be an area of continued improvement over time. I'm not so pessimistic as to think this is an area where we will never see improvements, or that tools are just terrible. Our tools are very good and very useful. It's just interesting when you think about what a tool can do versus what it can't, and where you have to augment it. And as a result, it's good for those of us who make a living off of this. We do have
one question from the audience.
An audience member asks: how do we get better rapport with devs? They feel like what they're doing is pointless, or they don't see the reason for it. What recommendations can you give security practitioners to get devs to consider and involve security from the beginning?
It's always challenging to help the development side of the house understand your motivations. You want them to feel like you're trying to help them. So I think the more you can communicate toward the end goal of "hey, we all want to make the system more secure," and acknowledge that you may not understand what it takes for them to fix a particular problem, the better. You want to learn; you want to understand their challenges, because they're balancing a lot of priorities: deadlines, usability, performance. We have these conversations all the time. It helps that I have a software development background, and most of my team members do as well, so we can relate and say, hey, we'll give you a recommendation, but we know there are a lot of caveats to it, so here are maybe a couple of other options, or come back to us and let's talk about it in more detail. Or maybe we'll point them to someone else within the organization who has done this successfully and say, go talk with them, because they likely have a good solution or something you can start with as a reference point.
you think someone asks a very short question DMZ? Yes or no? pros? If yes, because of No, yeah,
I'm all for DMZs. I mean, there's a lot more to that answer based on the context of the system, but yeah, DMZs are generally a good thing. Your basic cybersecurity concepts are almost always going to hold true regardless of the system. For example, the things we test for and look for in a modern cloud-hosted, containerized, orchestrated system are in many ways similar to the things we look at in an embedded IoT system. Obviously the technology is different, but the core concepts are the same: you still have authentication, you still have authorization, you still have to protect secrets, and so on.
Great question, though. Indeed.
Just for the sake of those unfamiliar, how would you explain the idea of a DMZ?
A DMZ, or demilitarized zone, is essentially a sandbox area on your perimeter, usually with something like a web application that you're exposing to a hostile environment such as the internet. The idea is that your traffic hits some service or server in your DMZ, which is a more hardened environment: it has limited attack surface and more security controls in place. Your more sensitive systems, maybe systems that haven't had the same rigor or don't have security controls in place, sit on the back end. So your DMZ acts as a buffer between the front end, where you're receiving requests, and your back end. A web application firewall is the kind of system you'd generally see in a DMZ, whether it's a dedicated DMZ you set up or one that's implied by your cloud provider. And one of the benefits of having something like that is that items in your DMZ, a WAF for example, can be controlled by your production team or your infrastructure team, which might not be your software development team. Sometimes there are benefits there: you can make changes more quickly, it doesn't have to go through the same change control, and there are fewer people involved. So there are certainly pros and cons, but I definitely see benefits in having those sorts of perimeter defenses, including use of a DMZ. Okay, we've got a few
minutes left. One more question: do you ever get pushback on recommendations because something is, quote, "too obscure to be used," because of the use of older technology stacks?
All the time, we get pushback. Some folks we work with are great; others, I swear. We do this thing called retesting, where they make a change, or they come back with an explanation, and we verify it as an outside third party, and there are times where we've gone back and forth five times. I'll think, they're totally not getting it. First, I consider whether the problem is on my end, maybe I'm not understanding their solution, so I ask for more information. But yeah, not all the time, but in some cases we get significant pushback. In those cases we want to make sure we take the time to back up our reasoning: provide sources and guidance that aren't just us. For example, we'll point to OWASP, or to resources from NIST or elsewhere, and say, hey, don't take our word for it, look at what this particular standard or best practice says. Maybe they're having a hard time hearing it from us, and we can direct them to someone it works better coming from, or find an advocate for us within the organization. Sometimes it's just personality differences, as you can imagine. And as
we wrap this up, where should interested people go if they want to find out more about what you do and get in touch?
You can certainly feel free to email me, my email is in the link there, or hit me up on Twitter at Bryce x. I'd be happy to provide you with more information or just chat. And obviously, the OWASP resources and other things I mentioned are just good in general for learning more about what software developers need to do when it comes to building and hardening secure systems. Having that information helps you not only as a software developer, but also in providing better information to developers, and in looking for weaknesses in systems that developers are likely overlooking, and everything around that.
We have time for one more very quick question: have you tried, or seen, security analysis tool results being managed directly by devs? I have.
Yeah. And there are some very good tools, especially those that integrate right within a CI/CD pipeline; they can get those results straight to the team. Say the team is using a ticketing system like JIRA: if you've got a system that's fully automated and the findings come back right in their ticketing system, maybe just with a security flag or tag on them, those tend to work really well. They can't find everything, but the more integrated it is, and the more developers are able to work within their environment, using their own tools and IDEs, the better. It goes a long way toward catching things early, at least that low-hanging fruit, and taking care of it before it gets out into production or is found by a pen test, et cetera.
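To make that integration idea concrete, here is a small hypothetical sketch (the `Finding` fields and ticket payload shape are invented for illustration; a real pipeline would call the ticketing system's API with its actual schema) of a CI step that turns scanner findings into security-tagged tickets in the developers' normal workflow:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One result from a security scanner (fields invented for illustration)."""
    rule_id: str
    severity: str   # e.g. "critical", "high", "medium"
    location: str   # file and line the scanner flagged

def to_ticket(finding: Finding) -> dict:
    """Map a scanner finding to a ticket payload carrying a security tag,
    so it shows up alongside the team's ordinary work items."""
    return {
        "summary": f"[security] {finding.rule_id} in {finding.location}",
        # Map scanner severities onto ticket priorities; default to P3.
        "priority": {"critical": "P1", "high": "P2"}.get(finding.severity, "P3"),
        "labels": ["security", finding.rule_id],
    }

# Example: one finding from a scan, converted for the ticketing system.
findings = [Finding("sql-injection", "high", "api/users.py:42")]
tickets = [to_ticket(f) for f in findings]
print(tickets[0]["summary"])  # → [security] sql-injection in api/users.py:42
```

The design point is the one from the talk: developers never leave their own tooling, and the `security` label lets the team filter or report on these findings without a separate security workflow.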
Excellent. And with that, I think we'll have to wrap up; we're just about out of time. This has been Weaknesses in Security Testing. Bryce Williams, thank you very much for joining us. Great. Thank you, everyone.