We met with Scott Gerlach from StackHawk and Darren Richardson from Eficode to talk about API security testing tools. We discuss how to help developers embrace security at the right level, how front-end and back-end developers should approach negotiation and contracts for APIs, and what requirements companies should look for in an API security testing tool.

Scott (00:10):

Ok, we’re starting an AppSec program at a brand new company. We have nothing, and we’re starting. The first thing I would do is go to, like, the chief product officer or the chief financial officer and go “Where are we driving value and revenue from?” And figure out what things engineers are working on that deliver that, and go sit with those engineering teams and just participate in their scrums: sit in on their daily scrums, their planning meetings, and listen to the problems that they’re trying to solve, and how they are solving them, and how they’re thinking about them, for a month. I would maybe say ‘hello’ and stuff and not be creepy, but I wouldn’t say anything else in their scrums and just tell them “Hey, I’m here to observe. I want to understand what you guys are working on.”

Lauri (00:55)

Hello and welcome to DevOps Sauna. The DEVOPS Conference is coming again on March 8th and 9th, and you are invited to join our event. To build the excitement, we have invited some exciting people to join our podcast and share a bit of backstory to the themes we will be covering in the event. This time, we have Scott Gerlach and Darren Richardson. Scott is a long-term security practitioner and the Chief Security Officer and Co-Founder of StackHawk. Darren Richardson is a Cloud Security Architect at Eficode. Scott and Darren discussed API security testing tools: how to help developers embrace security at the right level, how front-end and back-end developers should approach negotiation and contracts for APIs, and what requirements companies should look for in an API security testing tool. Let’s tune in to the conversation.

Lauri (01:53)

Thank you for taking the time, Scott and Darren, and welcome both to the DevOps Sauna podcast.

Scott (01:59)

Yeah, thank you for having me.

Darren (02:01)

Thank you.

Lauri (02:02)

We are in preparations for The DEVOPS Conference, and for those people who haven’t been to The DEVOPS Conference before, let’s just put out some facts from the previous time we had it in March 2021. We had a little more than ten thousand people who registered, and concurrently, we had almost 3,000 people joining at the best hour of the two days, and almost 7,000 people joining over that period of two days. So it’s going to be a packed agenda and there’s going to be a lot of interesting talks. And today, we are going to talk about security, which is coming up as a theme in The DEVOPS Conference. More specifically, today we are going to talk about API security and API security testing tools. Why don’t we jump right into the first question and a conversation around how to teach developers application security in an effective way?

Lauri (02:58)

So basically there, from the backstory, security tools are typically built for security teams and written in a security person’s language. But maybe security is not every developer’s bread and butter, yet they have to learn security as part of their profession. So how do you teach application security to developers in such a way that they don’t have to learn everything from scratch and all the twists and turns and tidbits? So Scott, maybe we start with that.

Scott (03:28)

Sure, that sounds good. You know, I think this is a funny conversation because we always talk about this in a way that doesn’t ring true to me sometimes, where we say things like, “Developers need to learn application security and developers need to understand”, and then we don’t do that to ourselves. We don’t go “Security people need to learn how to code and how to write intricate application software” and, you know, those kinds of things, so it’s weird that we do that. To me it’s weird. I like to think about it as: we need to empower software developers to be able to know when their software is making security mistakes.

Scott (04:04)

They don’t have to be experts in application security. They don’t have to be experts in pen testing; they have to be experts in writing software. But part of that is: can we give them the tools and information to make decisions quickly? And that is maybe obfuscating a really difficult topic, which is application security, and turning that into easy-to-consume information so that people can make decisions. If you’ve ever seen me talk, you’ve seen me do this, like, weird slide where I go: “The executive team wants to change the pricing of a product”. They don’t go, “Hey FP&A team, we wanna change the SKU price of this thing”, and the FP&A team doesn’t turn around and go “Cool, let me teach you about Excel. Now, Excel has a lot of power and tooling in it, and you need to know that it was written by Microsoft”. Like, they don’t do that.

Scott (04:52)

They go, “Here’s the spreadsheet. Change the topline number and you can see what falls out the bottom”. Alright so, they’re giving them tools and information to be able to make decisions, and I think we as the security industry should think about that and how we can empower those engineers to make active decisions about application security in a much simpler fashion that doesn’t require, you know, hairpinning everything through one security professional in the organization, or using some crazy tool that you have to spend a month learning how to actually install, set up and run, and then consume results. Instead, making that much, much easier to consume. I’d love to hear Darren’s opinion on the same topic here.

Darren (05:32)

Yeah, I fully agree. Like, the question is based around security being a security person’s language, and I think it really needs to be treated like a language. Like, as you say, we need to empower the developers to be able to make these decisions and understand these situations, and I would perhaps approach that by trying to find the kind of translators or interpreters within your teams already. So, in my experience, there’s always a lot of overlap between security and development.

Darren (06:02)

There are security-educated developers, and there are security personnel with development experience. And finding these people and kind of elevating them to the place where they can support the developers as they need to do these security tasks, and then making them the leaders of the development-driven tool sets required for security is, in my opinion, the best way to progress: to basically raise the development to where it needs to be, security-wise, by giving them the responsibility for it and setting the paragons inside their teams to handle that.

Lauri (06:44)

What are your experiences with developers’ willingness and openness to approach security topics when the opportunity is given to them?

Scott (06:54)

That’s a great question. It’s super loaded with what the dynamics of the organization are and how security has influenced or affected development teams previously, but generally, I think developers are willing to learn anything that helps them do their job faster and better. And their job is to deliver value to customers, right? Writing application software that quickly delivers value to customers in the form of features and functionality or bug fixes or security stuff, depending on what they’re trying to do. Where you get a lot of resistance, in my experience, is when you try to teach them a different job. So when you try to go, “Hey, I know your job is to deliver features to customers, but I also wanna teach you how to do my job as an application security person”.

Scott (07:39)

Then they start getting that squinty-eyed, “Hmmm, this is gonna go sideways” kind of look on their face. But at the same time, some of the more effective security training I’ve ever done, and I say effective now and then at the end you’re gonna be like, “That’s not effective at all”, has been finding stuff that bug bounties or pen testers or application security professionals find, grabbing that dev team, and then taking them through the journey of an AppSec pro: I found this, and then I poked at this, and then I poked at this, and you can see how I then used that to exploit the service. And usually in that, you know, developers are a class of people that are mostly curious, I think. Generally, they’re very curious about learning things and understanding new perspectives. Whether or not they portray that when they speak is a different thing. But when you put them in that situation, generally, they’re like: “Wow, this is really cool”.

Scott (08:32)

The problem is it’s not a core piece of the function that they perform in the organization, so they are looking at all the work that an AppSec person is doing here and they’re going, “Okay, let’s get to the punch line here, how do I fix this?”. And then we expect them to go do all that pre-work that an AppSec person would do, to figure out where the problems are to then fix. And that’s where you get the “I don’t have time for this”. Right, you know, “I don’t have time to go put in all this work and learn all these tools. I have a job, and I get paid and incentivized by how I deliver software, and my performance is measured on how quickly I deliver quality software to the organization and then ultimately to the customers”. So, you know, back to that same question: how do we get them that information so that they don’t have to put in a bunch of work to make decisions?

Lauri (09:22)

Any thoughts, Darren, on general developers’ openness to that area?

Darren (09:25)

Yeah, I think the key aspect here is speed. In my opinion, developers don’t want to be chained down by security processes which will slow them down. It is exactly like Scott says: they want to be able to deliver faster and at higher quality. As soon as you insert a security tool which starts slowing them down, that, in my opinion, is where you’ll find the resistance you’re talking about. And I think these security tools going forward are going to have to take that into account, because if they want to be useful they have to be as invisible and streamlined as possible, to make sure the impact on the development cycle is as minimal as possible.

Lauri (10:15)

We could talk a little bit about the requirements for effective testing, maybe not only application security testing but more specifically API testing. ’Cause I imagine that’s increasingly the case where, “Okay, there is this concept of a full-stack developer”, and then there are people who’ll say, “Okay, full-stack developers really don’t exist. You’re either a back-end developer or a front-end developer”. But whichever way you take it, you are going to be faced with API testing nevertheless, either from the perspective of developing an effective back end or developing a front end which does the job. So, let’s take a little deeper dive into the requirements for effective API testing.

Scott (10:53)

Yeah so, I like the mythical unicorn full-stack developer; they just don’t exist. But if you do find one, squeeze them for their tears. Anyway, I think the requirements for effective API testing aren’t different, in my mind, from application security testing, if you think about how it should work today. APIs are in fact applications. And when you think about how you test APIs, I like to think about that as how you develop APIs. So those two things should go hand in hand. They already do for a lot of other kinds of testing, not only security testing. So if you think about this: unit tests, integration tests and functional tests. If you’re doing microservices, most of that happens at the microservice layer and then at some integration layer, right?

Scott (11:43)

And it’s almost never “push this whole thing to production, let’s test the entire app and API behind the API gateway, and hopefully nothing goes wrong”. We’re testing small bits of code as we’re developing it, in pipeline, in local development environments, and I think application security testing, or API security testing, should be the same. You should be able to go, “I can run linting rules, unit tests, integration tests on my local machine; I should also be able to run security tests on my local machine”. Not that I have to do it every time, and CI/CD should back it up. But if my CI/CD does linting, unit tests, integration tests, and then we sneak in the security test that I can’t do locally, like, I as a developer, I can’t run this thing locally.

Scott (12:30)

The only way I can make it work is in CI/CD. You’ve created this, like, weird paradigm where people’s next step is, “Okay, how do I get around this thing? Like, how do I get this turned off? Because it stops me and I can’t self-service”. And I think that’s really important for API security testing. And then being able to decouple away from front ends: when you’re dealing with APIs, a lot of security professionals think about testing APIs as “I have to instrument the front end to be able to effectively test the back end”. That’s a way; you can do it, I suppose. But that doesn’t guarantee anything, because a threat actor doesn’t work that way. A threat actor doesn’t have to use your front-end web app to access the API, if it has a front end at all. Then, using staging databases, so seed databases, to make small data sets so that you’re not iterating over the entirety of production data sets.

Scott (13:21)

I like to call that the pants problem. So, you think about an online store that sells pants and you’re trying to test that API, and there are like five routes total on the API: “Give me the list of pants. Give me the sizes for those pants. How many pants do I have in stock?”. If I have thirty thousand pants, I’m gonna test five different APIs, like, 30,000 times each. If you do that with seed databases it makes it so much easier, right? It’s the same functionality and it’s the same information. You can do that much smarter in API testing. And then obviously, speed is key here, because you wanna let developers do what they need to do quickly, so being able to do that fast is key. And being able to do that on smaller bits of code and smaller pieces of functionality, all those little ways you develop stuff today for efficiency: if you can test that way, and speed is part of that, then the whole thing is much, much better. You deliver higher quality software faster and you don’t get a bunch of re-work because of things that get discovered later in the life cycle.
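Scott’s seed-database idea can be sketched in a few lines. This is a hypothetical illustration, not any particular product’s tooling: a tiny SQLite “pants” table gives a scanner the same code paths to exercise as a 30,000-row production catalogue would.

```python
import sqlite3

def seed_test_db(path=":memory:"):
    """Create a tiny 'pants' catalogue for scanning, instead of
    pointing the scanner at the full production data set."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE pants (id INTEGER PRIMARY KEY, name TEXT, size TEXT, stock INTEGER)"
    )
    # Three representative rows exercise the same routes
    # (list, detail, sizes, stock) as 30,000 production rows would.
    rows = [
        (1, "Classic Chino", "32x32", 12),
        (2, "Slim Jean", "30x30", 0),      # out-of-stock edge case
        (3, "Cargo Short", "36x30", 5),
    ]
    conn.executemany("INSERT INTO pants VALUES (?, ?, ?, ?)", rows)
    conn.commit()
    return conn

conn = seed_test_db()
print(conn.execute("SELECT COUNT(*) FROM pants").fetchone()[0])  # 3
```

The point is to keep representative edge cases (an out-of-stock item, an odd size) while shrinking the iteration count the scanner has to perform.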

Lauri (14:26)

Hmm, there was something that really spoke to me when I was looking at some of the characteristics of an API testing tool, which was to provide a curl command that caused the error. So you run the testing, and then when you find something it says, “Okay, here’s a curl command that caused it”.

Scott (14:51)

Sure. The other thing is, you can take that curl command, like we talk about a lot at StackHawk, and when an alert fires, go: how do I recreate the same thing that the scanner did, so I can go in, debug and restart? But it’s also fairly simple to turn that into a regression test then, right? So I find this problem, I should be able to turn that into a regression test. I should never introduce this problem again. If I do, the security test should back me up, but I should be able to turn that into a pretty simple regression test based on what the curl command looks like and what it’s doing.
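As a sketch of that idea, here is one way a scanner’s reproduction curl command could be folded into a regression suite. The endpoint, payload, and flag set are all hypothetical; the parser only handles the handful of curl flags a repro command typically uses.

```python
import shlex

def curl_to_request(curl_cmd):
    """Parse a scanner's reproduction curl command into the pieces
    needed to replay it as a regression test (method, URL, headers, body)."""
    tokens = shlex.split(curl_cmd)
    req = {"method": "GET", "url": None, "headers": {}, "data": None}
    i = 1  # skip the leading 'curl'
    while i < len(tokens):
        tok = tokens[i]
        if tok in ("-X", "--request"):
            req["method"] = tokens[i + 1]; i += 2
        elif tok in ("-H", "--header"):
            name, _, value = tokens[i + 1].partition(":")
            req["headers"][name.strip()] = value.strip(); i += 2
        elif tok in ("-d", "--data"):
            req["data"] = tokens[i + 1]; i += 2
        else:
            req["url"] = tok; i += 1
    return req

# Hypothetical alert: SQL injection on a search route.
repro = ('curl -X POST https://api.example.test/search '
         '-H "Content-Type: application/x-www-form-urlencoded" '
         '-d "q=\' OR 1=1 --"')
print(curl_to_request(repro)["method"])  # POST
```

In a real suite you would replay the parsed request with an HTTP client and assert the response no longer matches the alert (no 500, no reflected error), so the finding can never silently return.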

Lauri (15:26)

Yeah. Darren, your thoughts on the requirements for effective API testing?

Darren (15:33)

I think again it’s primarily along the lines of speed. If we think of what developers want as one thing, it’s to have the results of their scans as soon as possible and not to be slowed down, so obviously speed, and responsiveness so that the results are easy to understand and use, are the highest priority. And going a bit deeper, I’d probably say that the way to do that is to prioritize the common issues. Obviously, there are these OWASP Top 10 kinds of problems, which comprise 95% of security issues, or however high the percentage is. And we can scan for every issue all day long if we want, but all that’s going to do is introduce unnecessary speed bumps. So if you want to integrate API testing effectively into every pull request, you want it to be as quick as possible, and you don’t want to let yourself get bogged down by all the trappings of things that probably won’t affect you in the long run anyway.

Lauri (16:37)

Any thoughts on that Scott?

Scott (16:38)

Yeah, I mean, obviously I agree that speed is key here. Sometimes I think about the OWASP Top 10 as being dangerous, because a lot of people go, “Are we testing for the OWASP Top 10?”. There’s more than 10, right? There are more than 10 things that can affect your application, your API. That was just a community that got together and went: “These are the most important things, the top 10, that are affecting applications and APIs today”, and they change. And guess why they change: because things start at, like, number 20 and become more important, and then they get stuffed into the top 10 and other things move down, you know? Sometimes I feel like OWASP is doing a great job of helping educate the market about what kind of problems exist in applications, and maybe doing a little disservice by going: “These are the 10 you should pay attention to”. So you know, I agree that speed is really key, I agree that having some kind of understanding of the Top 10 is important, but that doesn’t mean that something that’s number 11 can’t be the root of a huge problem in your application. You know what I mean? So I have a love-hate relationship with the Top 10.

Darren (17:44)

Yeah, I think it’s more about having a framework of prioritization. It doesn’t necessarily need to be a top 10, but you need to be able to determine what you’re looking for and which ones are the most important. The OWASP Top 10, yeah, it does have its downsides, but it does give us that framework as well.

Scott (18:07)

The way the OWASP Top 10 started was really: let’s put educational material out in the market so people can start learning about application security issues, and just general awareness of what those things are, because it was kind of a new space, right? We had Jeremiah and those guys out there doing crazy stuff with web applications, and everyone’s like “Wait! You can do what to things?”. And so the Top 10 was really good, like, here are 10 things that you should be worried about. Many, many more people, including CEOs, are more aware of application security issues today. Like I said, I have a love-hate relationship with the Top 10 because people get so focused on the 10, and they forget that there’s more than just 10, right? There’s a lot!

Lauri (18:54)

Is there a way to look at that from the perspective of, “Okay, let’s try to be incredibly efficient on the Top 10 so as to get more time to work on the rest”?

Scott (19:04)

Yeah, I think so. I mean, think about a couple of other really important ones in the OWASP API Top 10, which are broken access control and function-level authorization. Those two things are the ones that get you in the most trouble in APIs, but they’re not the only things that happen in API security. But if you understand, and this is one of my big beefs with how we do application security: broken access control is called tenancy filtering by most developers, like, how do I keep one tenant from another tenant’s data? And in application security land, in security personnel land, we call it something completely different, so now there’s got to be this translation layer and all kinds of stuff. Instead, being able to go “Hey! If you’re customer A, you shouldn’t be able to see customer B’s data, and you should write tests for that”, you’re just taking care of, I think it’s number one, it’s either number one or number two in the Top 10. Now you’ve got a lot of time to focus on other stuff that’s in there, including injection and data working agreement stuff with the front end, if there’s a front end. You know, lots of other things. That tenancy filtering bit, it’s kind of write once, use many. As long as you get it right that one time and use it many times, you’re in pretty good shape with most of your APIs.
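The “write once, use many” tenancy check Scott describes can be pinned down with a test. This is a minimal sketch with hypothetical names and data, standing in for whatever data layer a real API would use:

```python
# Minimal sketch of tenancy filtering ("broken access control" in
# security terms): every lookup is scoped by the authenticated tenant,
# and a test locks that behaviour in place.
ORDERS = {
    101: {"tenant": "customer-a", "item": "pants"},
    102: {"tenant": "customer-b", "item": "shirt"},
}

class Forbidden(Exception):
    pass

def get_order(order_id, authenticated_tenant):
    """Look up an order, refusing cross-tenant access instead of
    trusting the caller to only ask for their own IDs."""
    order = ORDERS.get(order_id)
    if order is None:
        raise KeyError(order_id)
    if order["tenant"] != authenticated_tenant:
        # In a real API this should look identical to "not found",
        # so valid IDs don't leak to other tenants.
        raise Forbidden(order_id)
    return order

# The regression test every route like this should carry:
def test_cross_tenant_access_denied():
    try:
        get_order(102, "customer-a")   # customer A asks for B's order
    except Forbidden:
        return True
    return False

print(test_cross_tenant_access_denied())  # True: access was refused
```

Get this one check right in one place, reuse it on every route, and the top item on the list is largely handled.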

Lauri (20:24)

You mentioned the difference between tenants, but there’s also a difference between teams. In one way, yes, you don’t want misappropriation of one customer’s data by another customer, but it would also be nice to have a way for front-end and back-end teams to communicate in a shared manner and have the same vocabulary for the same things. Let’s start with the question: is that a problem? And if so, how should teams go about establishing a vocabulary, a way of communicating, and maybe codifying it as a contract?

Scott (21:05)

Darren, I’m curious what your thoughts are here. Like I have a whole rant about working agreements here, but I’m curious what Darren’s thoughts are.

Darren (21:13)

I’m not really sure on this one. I think, depending on the culture of the company, there are always going to be communication issues between the back end and the front end, between the back end and security, between security and the front end. There’s always going to be this kind of locking of horns, in my opinion, where the responsibility will ultimately fall on the back end to support the data that needs to be handled and the API as it stands. But other than that, I’m not really sure how to approach the communication issues there, aside from, as I mentioned before, trying to set up these translators. It’s like you say, there’s the tenancy side and there’s the authorization side. These are the same things but from a different angle, and only by having people in the team who understand that this is the same point being made in different words, people who have the knowledge for security and the knowledge for development, is, as I see it, the best way to approach solving this problem. It’s again a case of language, and making sure there are people in the working unit who understand both of them.

Scott (22:35)

Yeah, totally. Agile brought along this whole idea and concept of working agreements: how agile teams interact with each other, and when they communicate, and how they communicate. I think how you’re handling data can be included in that. I think it’s even easier when you’re dealing with REST APIs, and GraphQL to some extent. If you’re dealing with SOAP, I feel sorry for you, but also good on you, because XML is fun. Anyway, when you’re talking about APIs specifically and the back-end team that’s handling the API data, the great thing is you can codify all this stuff in an OpenAPI spec. You can communicate with the front-end team, and anyone else who’s gonna consume this API, in the OpenAPI spec.

Scott (23:27)

Like, you can go “Hey, I’m going to return you this kind of data, and it’s going to be encoded this way, and you will have to decode it to get it to a normal state. When you decode it, you should safe-decode it”, or whatever that’s gonna be, because you have the ability to codify exactly how you are going to communicate in that API and make it standard across many front-end teams and many API teams. Think about AWS and how their APIs are documented. For better or worse, they are documented pretty well in what they will return and what they will accept. That’s because when Bezos came down from the mountain with the ten commandments of “thou shalt API all the things” in Amazon, he was kind of foreseeing this: what’s the working agreement between all these disparate teams?

Scott (24:20)

They’re working on infrastructure, and services, and inventory, and all the stuff to be able to quickly iterate and develop and deliver platforms to be able to sell everything, and then turn that into what is now AWS. So being able to not have to have a meeting between the infrastructure team that does instances and the networking team that does connectivity, but instead documenting that all in APIs, so that both teams can go “I know exactly how to spin up a VPC when I start an EC2 instance”. I was wondering what they were called before they were called AWS. When you spin up an instance, how do I attach it to a VPC? That’s all codified in APIs. And it’s such a great, very standard, prescriptive way to communicate: here’s how I’m handling data, and here’s what you’re gonna have to do with it. I think,

Lauri (25:12)

Darren.

Darren (25:14)

Yeah, I agree, and we’re starting to even see that across the other cloud platforms as well, where the language becomes kind of common. It’s actually kind of an interesting occurrence, because I think, as you say, the language used by these massive cloud systems that we’re all now being kind of herded towards is what’s going to make the communication possible in the future. It’s kind of becoming pervasive that that will be the language we use, so there are some good sides to that as well as the obvious negatives.

Lauri (25:48)

It’s very interesting to listen to this conversation, because we started with the word “contract”. Now, after five or seven minutes, we figured out that, okay, there is a contract, but it’s not a contract in the sense that we as humans usually perceive contracts. It’s a way for us to say, “Okay, here is how we put it down in a technical sense, so we don’t need a contract, because it’s the agreement in the system that establishes the contract for all of us”. And I can see that human communication is extremely ineffective and error-prone as a transmitting mechanism, and codifying it in some other way is much more unambiguous.

Scott (26:25)

The downfall here is, if you don’t have an OpenAPI spec or an introspection query that defines all of this stuff, then it’s still ambiguous. However, there’s a ton of internal efficiency that you gain by doing that as you’re developing software. I’ve had a lot of conversations with people who have said, you know, we don’t have an OpenAPI spec for this REST API. I always ask them a couple of questions: How do you onboard new engineers to that service? And how do other teams in the organization interact with that service? And there are usually two answers. One is, it’s in a Word doc, and I’m like, ‘what?’. The other is, they read the source code. One of those is crazy: I don’t know who’s keeping up a Word document about software that you’re writing in some other language. But the other one is, like, reading through all the source code to figure out how to interact with my API? That’s so inefficient. So a little bit of work up front not only makes interacting with that service much easier, it also makes onboarding new engineers easier so that you can go faster, because they can also read the OpenAPI spec and go “Oh, I get what this is doing. Now I can find that place in the code and iterate on it, or make it better, or add functionality that doesn’t exist”.

Darren (27:43)

I think this kinda brings us back around to speed again, because the reason these OpenAPI specs, this level of documentation, don’t exist is often because developers don’t want to feel constrained. They want to code; they don’t want to be writing about the code. It’s kind of a self-fulfilling prophecy that just goes around.

Scott (28:03)

Yeah. Sort of. I mean, most frameworks have the ability, with annotations, to automatically create OpenAPI specs, so it’s just understanding how to use that and then using it. I don’t necessarily think it’s any slower than what you’re writing as code anyway. The awareness of “Can I do it?” is usually the biggest hurdle.
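The annotation idea can be illustrated with a toy sketch. This is not any particular framework’s API (real ones such as FastAPI or springdoc-openapi generate the spec for you); it just shows how route declarations can double as the machine-readable contract, so the spec can’t drift from the code:

```python
import json

# Toy illustration of annotation-driven spec generation: each decorated
# route contributes its path, method, and response shape to a registry,
# and the OpenAPI document is assembled from that registry.
ROUTES = []

def api_route(path, method, summary, response_model):
    def decorator(fn):
        ROUTES.append({"path": path, "method": method,
                       "summary": summary, "response": response_model})
        return fn
    return decorator

@api_route("/pants", "get", "List pants in the catalogue",
           {"type": "array", "items": {"type": "object"}})
def list_pants():
    return [{"id": 1, "name": "Classic Chino"}]

def openapi_spec(title="Pants API"):
    """Assemble an OpenAPI 3.0 document from the decorated routes."""
    paths = {}
    for r in ROUTES:
        paths.setdefault(r["path"], {})[r["method"]] = {
            "summary": r["summary"],
            "responses": {"200": {
                "description": "OK",
                "content": {"application/json": {"schema": r["response"]}},
            }},
        }
    return {"openapi": "3.0.3",
            "info": {"title": title, "version": "1.0"},
            "paths": paths}

print(json.dumps(openapi_spec(), indent=2)[:60])
```

Because the spec is derived from the same declarations the code runs on, onboarding engineers and consuming teams read one artifact, and it is always current.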

Lauri (28:26)

It’s Lauri again. Building quality right into your software development is a necessity. To learn how it works and how to get there, we’ve recently released a new Continuous Quality Assurance guide that will give you the foundation and understanding around the area. Whether you work in management, development or elsewhere, this guide talks to you about test automation, test design, test metrics, test environment, test data, and the future of continuous quality assurance. You can find the link to the guide in the show notes. Now, let’s get back to our show.

Lauri (29:01)

I think we are approaching the big question, which is: in order to ship secure applications, what should the primary targets for application security testing be?

Scott (29:13)

Primary targets for application security testing. When I start a new security program, I take a risk-based approach to this: what is driving value for the company and for customers, and start there. Obviously, you could start with the internal employee directory or e-page or whatever that is, but you’re probably not gonna get a whole lot of bang for your buck trying to tackle that. So start by taking a risk-based approach. And the other thing I coach security people on my team on is: if you have not worked with an engineering team before, go sit in on their scrums, go sit in on their meetings and just listen and be a participant slash team member to them. So if I was gonna go, “Ok, we’re starting an AppSec program at a brand new company. We have nothing, and we’re starting.

Scott (30:06)

The first thing I would do is go to, like, the chief product officer or the chief financial officer and go “Where are we driving value and revenue from?” And figure out what things engineers are working on that deliver that, and go sit with those engineering teams and just participate in their scrums: sit in on their daily scrums, their planning meetings, and listen to the problems that they’re trying to solve, and how they are solving them, and how they’re thinking about them, for a month. I would maybe say ‘hello’ and stuff and not be creepy, but I wouldn’t say anything else in their scrums and just tell them “Hey, I’m here to observe. I want to understand what you guys are working on.” And then after a month, start working with some of the team leads and go “Hey, I think there’s a way we can introduce some security testing or some security process.” Threat modeling, chaos engineering, whatever it is. Software composition analysis, static code analysis, dynamic code analysis. There are a hundred places to start.

Scott (31:01)

It depends on what the thing is and why it’s valuable. I always tell people starting with SCA and DAST is a really good place to start, because those are two different pieces of information that you action in two different ways, and they give you good coverage on the code bases that you’re working with. But start introducing stuff slowly and go “I think this will work”, and then iterate with that team, because if it doesn’t work for the team, it’s not gonna work. You can’t just go “Here’s the tool. I put it in your pipeline. I gotta go to lunch/vacation.” Or whatever it is, right? Iterate, figure out how that works, and build champions on the engineering team that are like “Hey, that security team over there came to us and they turned into a really cool partner, and they’re not telling us no and they’re not stopping us from doing stuff. They’re helping us go faster.”

Scott (31:48)

You’d be amazed at how quickly that spreads like wildfire in the engineering team, as they’re talking and just collaborating, like at lunch: “What are you guys working on?” “Well, we got this security person who’s in our scrums, and he or she is actually awesome.” Hopefully, you can drive people to come and ask you for help. That’s the best-case scenario. But make sure that things are working for that engineering team, and then get to a baseline status where you’re like “Okay, we think we have everything covered. If new stuff pops up, we can make decisions about it and then talk about it later.”

Scott (32:19)

And I think that part is important, too: make decisions, talk about it later. Usually that is flipped, “let’s talk about it and then make decisions,” and that is the antithesis of speed, especially when there are one to five AppSec people that you have to do the talking with. And I think this comes back to a core tenet of security people’s job, trust and verify, that we seem to have lost a long time ago. Like, trust and verify, but I don’t trust anyone so I can’t. But trust and verify is super important in this process: let people make decisions, review those decisions, and if you disagree, go have a conversation instead of “I’m the only one that can make decisions.”

Lauri (32:29)

Cool. Darren.

Darren (33:01)

Yeah. I think this is kind of a difficult question to answer if we’re looking at the primary target of app testing, to ship secure applications, without really seeing the application in question. At least for me, without the application, without the threat model, it’s difficult to ground the discussion. But first, I agree with what Scott was saying about finding these champions inside the application teams. So many things we’ve discussed so far have all come back to communication, and having these security enthusiasts in the development teams is always going to make communication easier. But from the side of shipping secure applications, I can only really talk about different kinds of perspectives and theoretical priorities, what I would be looking for in these kinds of tests.

Darren (33:57)

It’s actually quite interesting because, thanks to the EU, that’s changed somewhat over the last four years in the wake of GDPR. Before, the priority might have been, for example, ensuring uptime of systems, but now, obviously, the priority has shifted towards ensuring that sensitive data is not in any way available. So we’re kind of in a process where the priority is moving towards ensuring data leaks are not possible: sensitive data like names, credit card details, phone numbers, the kind of things that cause problems under GDPR. And I’d be quite curious, because I don’t know if there are comparable regulations in the US at the moment, Scott, but maybe you can comment on that.

Scott (34:48)

Yeah, it’s kind of regional at this point. You’ve got stuff like California law that dictates some of this, and Colorado law that dictates some of it. It just hasn’t risen to the federal level for the United States, or even a union level like GDPR has today, but it’s getting there, right? States are starting to pick this up as their constituents become affected by these things, complain to their representatives, and then they start writing law. And hopefully people aren’t blazing a brand-new trail as they’re writing these laws, but making decisions based on consulting with people about what works in law and what doesn’t.

Scott (35:29)

I’m assuming at some point we’re gonna see that at the federal level in the States here, because it’s gonna be so unwieldy for companies to, like, comply with California law, and comply with Oregon law, and comply with Virginia law, and comply with Maine law, and comply with North Dakota law. You know what I mean? It’s just gonna become this crazy mismatched confluence of all kinds of law that you have to deal with and, I hate that I’m saying this, some kind of consolidated federal law could be better. How’s that? It could be better than this spread of little laws across the states. It could also be much, much worse. But let’s hope that’s not what happens.

Lauri (36:07)

There are two observations that I made throughout your conversation, and the first one was that you both advocated an approach I would call “first vertical”. So basically, pick a team, or pick a subset of the company’s software, and go deep with that team, and once you get it right, then go horizontal. Adopt the practices or adapt the scope with that team, whatever that team proposes, and once you get that right, you have learned something, and then you can start bringing in adjacent teams, and you can use communities of practice or other kinds of ways to proliferate that information. And I am really glad to hear that, considering that we always say that in software development most of it is culture, and tools then serve the culture, whichever way you want to put it.

Lauri (36:59)

The other, this is not an observation, but maybe some devices for those teams who need to make a decision. My background is in behavioral economics, so I just wanted to share something again from a cross-discipline perspective. Scott said to first make a decision and then discuss. There are two devices that those teams can try and apply to see if they have made the right decision. One is: make a decision and try to live with it, and if you cannot, then you know that you have made the wrong decision. That happens automatically in your approach, where you make your decision, then you start implementing it in software, and then you figure out that this really doesn’t seem to work, like we have made a wrong decision.

Scott (37:42)

Yeah, and I think an important caveat to “make a decision” is to make a conscious decision and not an unconscious decision. Like, if you are making decisions about stuff you are not considering, that’s an unconscious decision. That’s probably not what you want in that scenario, because you’re gonna be reliving a lot of those unconscious decisions. However, if you are making conscious decisions about things that you know, or things where you can make, you know, gut risk-acceptance checks, we do that all the time with, like, an MVP: “What parts of this thing do we have to have? What parts do we not have to have for the first cut of it, to see if customers like it?” Those are all conscious decisions, and I love what you said there. Like, if you made a conscious decision and it isn’t working, come back and redo it. 100 percent, that’s the Agile process: you iterate. Did the thing we decided on work? No? Cool, let’s change it and iterate.

Scott (38:40)

Sometimes it’s naturally hard for security professionals to participate in this Agile mentality. Security pros tend to think in absolutes, right? If there’s one thing broken, it’s all broken. If there’s one unpatched server, it’s all unpatched. Like the whole thing is at risk because there’s one thing... you know what I mean? This “it’s either all correct or all wrong” is a really bad mentality and mindset, and for a lot of security pros it takes a while to figure that out, to understand that there’s a business that started, and we put a bunch of money into it, and that was the very first risk that we took, that someone took. Putting money into a business and betting that people would pay money for a service, that was the very first risk we took, and you’re not even considering that when you’re thinking about risk as a security pro. You’re just like, “Cool, the business exists, it’s never gonna not exist, it never didn’t exist.” So that absolute value of risk is something that it’s hard to learn how to mold and shape into an organization based on risk tolerance.

Darren (39:49)

I do agree with you here, Scott, but I think also it’s a little bit… a little bit outdated to say security people are focused toward the absolute. I think over the last five years we have started to see a considerable change from saying no toward leaning in to find a way to say yes, and I think, yeah, it’s very important to not be the person standing in the corner just saying no to everything that everyone asks, because then, exactly as you were saying, they’ll find a way to circumvent you, or they’ll find a way to silo you or exclude you. So it’s vital to be able to keep that approach of “yes”, and to lead by building up security to give them that yes.

Scott (40:37)

Yeah, totally! I was just speaking in absolutes myself; we have a tough time with this. It’s definitely changing. People are seeing the writing on the wall, like, I have to be a consultant to the organization about risk and not just be the person who goes, “There’s no… We cannot have any risk…”, because it falls on deaf ears. So there’s definitely been a marked shift in how security teams and security professionals are talking about and discussing risk, and I loved what you said there about being the “yes” person. I’m a huge fan of improv comedy, and one of the key tenets of improv comedy is keep it going: somebody says something and you say “yes, and…”, and you keep the conversation going, and the bit and the skit roll on; it shouldn’t end. The same thing applies to how a business should run: it’s “yes, and”. “We wanna do something that’s risky.” “Yes, and?” Here’s how we can manage that risk, or take a big risk and minimize it down the road, those kinds of things. And the worst thing you can do is go “We want big risk.” “No.” You just killed the skit, man; our improv troupe is now on the floor going, “That’s not how this works.”

Lauri (41:55)

Yeah, we had a Rust programming language training the other day in the office, and we were having a conversation like, “Why does C as a programming language allow you to so effectively shoot yourself in the foot, as opposed to, like, Rust deliberately making it harder?” And one of the points of view in that conversation was that we have to look back at the fundamentals of the language, at the time it was created. Back when the C programming language was created, one more CPU cycle was incredibly expensive, so it was cheaper for everyone to teach a software developer and engineer to do the right thing and just tell them, “Don’t shoot yourself in the foot, it’s a bad thing to do, but here is how you do it.” Now the world has moved on, and the cost of a CPU cycle has basically collapsed, which I think is an understatement, and that allows us to create programming languages that make it harder to shoot yourself in the foot, because that’s not the purpose. Maybe there’s something along those lines also for security culture, security practice, and security tools, because now you can introduce security testing tools more effectively and shape the culture from the “no culture” to the “yes, but culture”.

Scott (43:19)

Yeah, absolutely. I mean, that’s kind of the very first thing we were talking about: how do you effectively give people the information they need to make decisions and not give them the C version of it, like, “Hey, you gotta pop this on the stack, then take it back off the stack, and make sure you deallocate the space,” blah blah. Like, yeah, you can do that, and there are lots of places that still happens. In the Linux kernel, that still happens because it’s super efficient, and then they create APIs for you to interact with so that it makes it harder for you to shoot yourself in the foot. And the same thing, totally to your point, can exist in application security, where I’m giving you enough information so that if you shoot yourself in the foot, you pointed it there and pulled the trigger. Like, it’s still doable, but you are actively making decisions to do it.

Lauri (44:14)

We are coming to our last question. It has been such an interesting conversation, but all good things have to end sometimes. It has been intertwining between culture and tools, and my last question for both of you would be to elaborate on the important characteristics to look for in security testing tools and API security testing tools.

Scott (44:39)

I’ll go first. You know I have a bias here, ’coz I am an AppSec, or application security, tool vendor, but I think if you consider tools that have the end user in mind, the person that is writing software, the person that’s fixing software, if you consider tools that do that first, you have a much better time, ’coz ultimately it’s empowering the people who are creating value and who are being incentivized by delivering and creating high-quality applications at a high rate. Their incentive is not to create the most secure application there is. So think about that while you’re doing tool evaluation. And, oh, I’ve got a good one here: by the way, if you are a security pro evaluating application security tools, and there is not someone from the engineering team in your evaluation doing the evaluation with you, you’re probably doing it wrong.

Scott (45:29)

Like, they should be partnered up with you. To try to figure out your job as application security pros, go like, “I know what tools are out there and what we can look at. Please come participate with me to do some evaluations on this stuff, on what will work with your development cycle.” If you don’t do that last part, tools are gonna be shelfware, like, “Hey, we rolled this out.” “Nope, we turned it off. It didn’t work.” So I think the real key is: does the tool have the person that’s gonna consume the information in mind, and are they making it hyper-consumable, hyper-actionable? Ultimately, can you go from “I see that there is a problem” to “I understand the problem” to “I am now fixing that problem” as fast as possible with that tooling? I think that’s the real key to empowering software engineers with application security tooling and software and process.

Lauri (46:21)

Over to you, Darren.

Darren (46:23)

Yeah, I think we’re gonna be talking along the same lines here, because I would say yes, it’s definitely the automation and the ease of integration, ’coz there’s this kind of old-school security paradigm where you have one security tester in front of the keyboard, opening Burp Suite and testing the API. The tools that were around at the time, at least the old-school tools, kind of enforced that paradigm, and you end up with this kind of audit-style test once a year, maybe every six months if you’re particularly effective. If you are testing your software for security every year, then every year you’re going to have a long shopping list of things you need to deal with. But the pace at which applications are evolving just keeps increasing and increasing. So, to be able to have these tools that you can automate and integrate into the developers’ platforms themselves, into the CI/CD itself, is so important, just to have that automation at the core, and to shift the responsibility away from the security team, at least in part, towards the developers, so you can have that kind of iterative process. It’s all about the iteration, and it’s about making sure every release has this testing, not just that one person looked at it at some point last year. It’s so important to be able to deliver these test results quickly and directly into the hands of the people who can do something about them.

Scott (47:59)

Yep, I totally agree with what Darren said. It’s so important to be able to test early, test often, test a lot. Baseline the thing, and when new stuff pops up, you probably just introduced it in your pull request, and you can go back and go, “This is the code that I changed; my issue probably exists here somewhere.” That’s so important, as opposed to, “Hey, there’s a problem,” and now you search all of the source code, you know, whatever. Making it smaller and iterative, like the Agile development process, is so important.

Lauri (48:37)

I’ve been listening to some podcasts, and there’s one podcast in particular where they ask this great question at the end: “What are we not talking about that we should be talking about?”

Scott (48:48)

What are we not talking about that we should be talking about? I mean, the end of Spider-Man: No Way Home, probably, but what’s the unknown unknown? I don’t know! I think Moxie just did a really good write-up on Web 3 and all the stuff that goes along with that. What are the security implications of… you know, he was talking about how Web 3 is about decentralization, but there are centralized platforms that are giving people access to these decentralized platforms, right? So, interesting consolidation of access to decentralization, which is sort of hilarious. He did a really good write-up, and obviously there are privacy implications in that, right? Those companies have the ability to go, “This wallet, and this wallet, and this wallet, and this wallet are all the same person.” So what are the privacy and security implications of those gateway services in Web 3? I think he made a super good point, even though he wasn’t specifically calling it out, I don’t think; he was just talking through his experience developing in a Web 3 environment and what that potentially could lead to. I think that was super interesting. Obviously, there’s much more to evolve there.

Lauri (50:02)

We’ll be sure to add the link to that post in the show notes, so people can do a deeper dive on that. Web 3 really seems to be coming up fast, and it’s fun to watch when most people still have their popcorn out and haven’t made up their mind on this whole thing, and then people slowly come forward and say, “This is my… this is my point of view on that.” We are really looking forward to very interesting opinions, conversations, and points of view on that. Darren, what should we be talking about that we are not talking about?

Darren (50:37)

If we’re talking generally in security, I would say supply chain issues. I mean, it’s kind of a cheap answer because the conversation has kind of started and is picking up speed, but I don’t think it’s picking up speed fast enough. All these dependency-based attacks are starting to become a considerable problem, and based on how we see it, it’s been a considerable problem for some time now. But we are only just getting the steam started behind that conversation, so I’d say that’s something we need to be talking about considerably more.

Lauri (51:16)

Now it’s time to say thank you, Scott, for participating, and thank you, Darren, for participating as a conversation partner. Again, it was a wonderful discussion. Everyone else on the line, you get to hear more about this subject in the next conference coming up at the beginning of March, and you’ll find the links in the show notes as well. Thanks again.

Scott (51:39)

Yeah, thank you for having me. It was super fun to chat with you guys today about application security and all the stuff that goes around it. Like, you say “application security” and you think about tools and people, but there’s a ton of culture and process and collaboration, all kinds of good stuff that goes into it. So I super enjoyed chatting with you guys about it today.

Darren (52:00)

I think it’s quite interesting how quickly the conversation shifts from application security to the concepts behind it; it always seems to come back to contracts and language, to be fair. Thank you for having me here. It has been very fun talking to you guys.

Lauri (52:16)

Thank you for listening. As usual, we have enclosed links to the social media profiles of our guests in the show notes. Please take a look. You can also find links to the literature referred to in the podcast in the show notes, alongside other interesting educational content. If you haven’t already, please subscribe to our podcast and give us a rating on your platform. It means the world to us. Also, check out our other episodes for interesting and exciting talks. Finally, before we sign off, I would like to invite you personally to The DEVOPS Conference, happening online on March 8th and 9th. Participation is free of charge for attendees. You can find the link to the registration page, where else, in the show notes. Now, let’s give our guests an opportunity to introduce themselves. Take care of yourselves, and see you at The DEVOPS Conference.

Scott (53:06)

Hey everybody, I am Scott Gerlach, Chief Security Officer and Co-Founder at StackHawk. StackHawk is an application security platform focused on developers, helping them find and fix application security problems while they’re writing code. I’ve worked in security for about 20 years: GoDaddy, SendGrid, Twilio, and a couple of jobs here and there in between. So going from a practitioner of application security, and/or being in charge of application security, to “Hey, I really need this tool”, to a maker of tooling and theory on process and all that stuff, has been a super interesting journey, and I hope you enjoyed our chat today.

Darren (53:44)

Hello, I’m Darren Richardson. I am the Cloud Security Architect for Eficode. I have been working in DevOps and security for the past four or five years. Thank you for joining me on this podcast.