The topic of today is threat modeling, a practice of identifying and prioritizing potential threats and security mitigations. Our guests are Anne Oikarinen from Nixu, and Nicolaj Græsholt from Eficode.

Anne (00:05):

Threat modeling is something that shifts security left; we can find problems early. Think about finding a security problem or a bug in production: if we had done penetration testing, we could have found it earlier. If we shift left, we could have done some static code analysis or scanning, and we might have found the problem then. But if we shift left even more, we could have caught that problem with threat modeling, and that's way earlier than any other phase of security reviews or tests.

Lauri (00:37):

Hello, and welcome to DevOps Sauna. Today, we have an exciting talk coming up. Our guests are Anne Oikarinen from Nixu and Nicolaj Græsholt from Eficode. The topic of today is threat modeling, a practice of identifying and prioritizing potential threats and security mitigations. You can find the speaker introductions at the very end of the recording. So let's get going right away with Anne and Nikolaj.

Lauri (01:07):

Today, we're talking about threat modeling, and I really want to start by turning to Anne to answer the question: what actually is threat modeling, and why is it important?

Anne (01:21):

Well, threat modeling is ... Well, if you want to summarize it really quickly, it's about thinking about what can go wrong, what bad things can happen, and what we can do about it to lessen the impact or make it somehow less bad. Threat modeling helps us identify problems very early in development, even before you start implementing anything. It can also reveal logical flaws and problems that could happen in the architecture or the process. It's really hard to pen test a process, and it can be difficult to find flaws that occur only occasionally. That's what threat modeling can do, and really quickly even. The benefits of threat modeling are that you can deliver better software, you can prioritize your security measures, and you can focus security testing on the riskiest parts of the system.

Lauri (02:17):

There's probably a very specific reason why you call it threat modeling. I understand there are also similar terms which may or may not mean the same thing. Let's clear that up for the audience: terms like threat analysis or threat assessment.

Anne (02:34):

Yeah. Threat analysis, threat assessment, it's the same thing. Sometimes people even talk about risk assessment and mean the same thing: thinking about potential security or privacy problems in advance. Why modeling? Because it's a systematic process, in the sense that you try to be structured and think of all the angles, and not just, "Okay. I thought about this one thing. Let's go on with the process."

Lauri (03:01):

Very interesting. It almost sounds to me like a framework for structuring your thoughts related to security so it will help you make sure that anything and everything you ever have to think about, it almost takes you through that thought process if I listen to how you're describing it.

Anne (03:19):

Yeah. That's the idea. There are actually several methods for doing threat modeling. Some methods appeal to some people, being more suited to their thinking, but I guess you could use several ways to get to the same results. That's okay. I think it may sometimes sound like this is about being really pessimistic: okay, I'll think of all the things that could go wrong, very nice. But that's not the idea. It's about thinking of all the possibilities, and then narrowing them down: okay, this is not likely to happen, or this doesn't actually have a big impact. And then, we can prioritize them or skip them.

Lauri (03:59):

Yeah. In many frameworks, and I believe it's also the case here, the hard part is to see what is not there. When you have a piece of software and you're looking at the source code and what it's doing, you can see all the code, and there is a way for you to analyze what is there. But then you have to think, "Okay. What is not here? I should be thinking about what is not there." It sounds to me like this could also help you think about those aspects that are not there, but that you should be aware of.

Anne (04:32):

You're right.

Lauri (04:33):

How did you eventually get interested in this? This is probably not something that immediately comes to mind when you start a career as a security specialist. What got you interested in threat modeling?

Anne (04:46):

I think I heard about it around five or six years ago. I was then working at the National Cybersecurity Center of Finland, so my work was about helping companies and individuals figure out whether they had a data breach or incident, and what they should do about it. I was also writing vulnerability advisories, but basically, I was always working with teams or people who had already suffered a security incident. And then, I started thinking, "Okay. Is there a way to notice this earlier?" I don't remember where exactly, but I heard the term threat modeling and got interested in it.

Anne (05:28):

It really resonated with me, because earlier, when I was working at a software development company, there was again this idea that you should know about security problems in advance. I had also been on some hacking courses, but you don't really learn how to hack in a two-day course, or even a one-week course. That takes time. And I don't believe every developer or tester should have to be a hacker to find security problems that way. It struck me that threat modeling is something that lets you find and identify really serious, real security problems without being an ethical hacker.

Anne (06:15):

And then, I read the book about threat modeling by Adam Shostack. I got so interested that I bought it, and it convinced me even more that this is something useful and something I want to do. Later, I started working as a security consultant at Nixu and spent some more time learning how to be an ethical hacker, but still, I think the combination of doing threat modeling and also doing security testing is the most effective thing you can do, because you cannot find all the problems by testing, and maybe not all the problems by threat modeling either, but the combination is really excellent.

Lauri (06:54):

Yeah. I'm thinking of you now, Nicolaj, from your perspective, because you introduced yourself as having a background in security, but now you're working with people and teams developing better software. I'm thinking of the upcoming question about DevSecOps and the security tools that are built into software development and really the process. I'm pretty sure that when you look at software testing practices and development practices, there are security tools integrated there, with this idea somebody has called DevSecOps. You could argue that there are so many automated security tools integrated into the CI/CD pipeline that it begs the question whether anything more needs to be there. I'd like to hear your thoughts, Nicolaj, because I believe you're coming from a slightly different angle, and try to get a conversation going between you from these two angles.

Nicolaj (07:52):

Sure. The nice thing about these tools that we integrate into our CI pipeline is that things become automated. We take some of the responsibility from the developers so they can focus on development, and we let the CI pipelines do all the boring stuff like security. We don't really want to think about it; we just want to automate it away. Anne's story got me thinking about how you get people interested in threat modeling. Sometimes, when I'm out in the wild, I need these arguments. You mentioned that you mostly worked with people who had experienced incidents, and because they've experienced incidents, they know what will happen if you don't have a secure system: it gets breached one way or the other.

Anne (08:45):

Well, not only with people who have experienced an incident. I think many developers have also just noticed, "Hey, this is really cool, because now I found this potential issue even before I coded it. Now I'm not wasting time building something only to have somebody tell me, 'Hey, this is insecure.'" That's one of the main reasons, I think. I'm not saying that automated tests aren't useful. They really are, and I love those. There's no way this is a matter of picking one or the other. I think you should have both.

Anne (09:24):

Actually, one thing I've noticed when you talk about threat modeling, or if you Google threat modeling, is that people sometimes seem to assume it's something you do for an entire system, which sounds burdensome. I think it would be better to threat model epics or user stories, in short batches, so it doesn't take much time to ask, "Okay. Is there a security impact? No, in this change there's no security impact." Then we take another story. Now we're actually introducing a new element to the architecture, or we're handling personal data, which we didn't do before, so we should think about security and privacy a bit more. And then, it can be simple.

Nicolaj (10:12):

Yeah, because it becomes quite a daunting task. I can imagine a parallel would be to introduce testing in a system if you then agree, "Okay. We're going to start with testing. Well, where do we start? Let's test the entire system from the beginning."

Anne (10:28):

Yeah. Maybe not that efficient, unless you haven't done anything so far, and then maybe it's something you need to do. I guess it would be interesting to compare: if you have security test automation in place and you fix those findings as well, then have a penetration test and see if there was something the automated tools missed. It could be a logical flaw, for example, because the tools are getting better and better, but basically they are testing patterns. There's no machine learning or artificial intelligence in those security tools so far. They are not thinking the way you must. You have to think in very nasty ways: if I combine this vulnerability with this vulnerability, then I get access to the entire system.

Lauri (11:26):

Let's take a hypothetical company that Nicolaj is working with. They would probably already have automated security tools integrated into their CI pipeline, and they may do some penetration testing already. Maybe they do, maybe they don't. If we were to build a case for including threat modeling in that entire toolbox, and in the process of taking a more comprehensive view of security, how would you start doing threat modeling? You already referred to one part of that, the whole system versus epics and user stories, but what else is there to really integrate threat modeling into your entire security process?

Anne (12:13):

Well, it depends a bit on how the team is already working. I wouldn't change the whole development process; if they are producing something that works, they have a process already. One thing that could work, for example, if they use a definition of "ready" and definition of "done", is to have a checkpoint in the definition of ready: "Okay. Is there a risk in this feature we are about to implement?", with some criteria like, "Are we adding new components, changing the architecture, or using different kinds of data than before?" Things like that.

Anne (12:53):

And then, if you notice something that matches, do a quick threat modeling session on it, basically on a Confluence page maybe: "Okay. These are the problems. Here are a couple of frameworks, like evil user stories." Try to think about potential problems, and then also think about the important part here: what can we do about it? Should we implement another check? Should we add some other security control, maybe from a web framework or provided by the platform, and then put those in the backlog?

Anne (13:26):

Also, maybe consider whether this is something we should especially test, with security testing or code security reviews. And then, in the definition of "done", have quick checkpoints: "Did we, by the way, do that threat modeling? If we didn't do it early, did we update something? Did we pass the review, did we pass the testing?" Something that fits naturally into what they're already doing, without making it too complicated; that's where I would begin. Also, if you have these security testing tools, of course make sure that their results are actually checked and used, because there's no point in having a security test running and producing reports that nobody even cares about.
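
Anne's "definition of ready" checkpoint could be sketched as a small script. This is a minimal illustration only: the criteria, the `Story` fields, and the note text are invented for the example, not anything Anne prescribes.

```python
# Sketch of a "definition of ready" security checkpoint: flag a story for a
# quick threat-modeling session when any risk criterion matches. The criteria
# and Story fields here are illustrative assumptions, not a standard.

from dataclasses import dataclass, field

@dataclass
class Story:
    title: str
    adds_new_component: bool = False
    changes_architecture: bool = False
    touches_personal_data: bool = False
    notes: list = field(default_factory=list)

def needs_threat_modeling(story: Story) -> bool:
    """Definition-of-ready check: does this story warrant a quick threat model?"""
    return any([
        story.adds_new_component,
        story.changes_architecture,
        story.touches_personal_data,
    ])

story = Story("Store customer emails for receipts", touches_personal_data=True)
if needs_threat_modeling(story):
    # Freeform notes are enough, e.g. a Confluence page or a Jira comment.
    story.notes.append("Threat model: who can read these emails? Retention? Logging?")
```

The point of keeping it this lightweight is the same one made above: the check should fit into the team's existing flow, not add a heavy form to fill in.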

Nicolaj (14:15):

Right. Yes. I've seen the latter far too often. You have these yellow and red flags all over, and yes, it's fine. Then we know the tests are still-

Anne (14:27):

Yeah. We're running the test. We don't look at the result.

Nicolaj (14:30):

Exactly. We know the tests are running at least. It's very useful advice: introducing freeform notes while people are working on the task, and then revisiting them afterwards when they feel they're done, because it lowers the barrier to entry a lot compared to having to fill out a specific form in a specific system somewhere.

Anne (14:54):

Yeah. No forms or anything like that. It's something you can write quickly, and I think that's enough. I think the minimum requirement for documenting is that you or your teammates can read it the next week and still understand what the thing was about.

Anne (15:16):

No need for writing extra documents.

Nicolaj (15:18):

I like revisiting it at the end when we feel that we're done. Then we've at least made explicit decisions about what we're going to fix, what we're not going to fix, and how we prioritize things.

Anne (15:31):

Yeah. Well, sometimes the risk is understandable, or so small that there's no point doing anything right now, but it's really good to have that information on record, that we decided this, because later on, if you have more resources or the situation changes, for example you put your internal application on the internet, it's a totally different scenario. Then you can check: "Ah, we made these decisions. We need to address them now before we do this."

Lauri (16:01):

What you said there got me thinking about the roles in a typical software development team or setup. Where do you typically see these tasks falling? You said that not every developer needs to become a hacker, but what advice would you give to software development leaders or professionals about how far they should acquaint themselves with threat modeling? And if they want to adopt threat modeling as a way of working, whose natural role in the software development team would it be to take that responsibility?

Anne (16:36):

Well, many teams actually elect a security champion among themselves. That security champion doesn't need to be a security expert or anything like that, just somebody who is interested in security and will take some lead in the team: "Okay. Now we should do that threat modeling," or contact the security team, or be in communication with an external security specialist to get help if needed, and maybe facilitate threat modeling workshops or other security activities. Usually there's someone who's willing to spend some more time thinking about security. But then again, I think every team member should at least understand why we are doing this and why it's important, and also the context: what the threats to the business are if security fails, so they can think about it in their work.

Anne (17:36):

Sometimes, it's more useful to have secure coding guidelines, but of course they need to be understood by the team members. I would approach this maybe the same way as you do with quality, that everyone in the team owns it, but a security champion role might be useful. And then of course, the PO role is important as well, because somebody has to prioritize whether we are doing new development or security fixes. It's not going to work if there's a push to do new features without worrying about the technical debt or security problems.

Lauri (18:22):

Interesting. Yeah. I'm thinking about two other topics related to these roles, because we at Eficode are huge advocates of practice communities, and some organizations might have a practice community approach to a thing like quality or security. Other organizations have a center of excellence, consolidating the people with deeper interest, and probably more focused efforts in those areas, into formal organizations. What are your experiences with either a practice community around threat modeling or a center of excellence for it, or is it more at a general security level?

Anne (19:04):

Yeah. I think if the company has those already, it would be a really good idea, because then you can spread the knowledge from team to team. They're probably working on slightly different projects, so you could say, "Hey, we found this thing, and it might apply to your project as well." For security tooling, for example, it could work really well: this is how you integrate this tool. And you could share threat modeling experiences. Yeah. Good idea.

Lauri (19:33):

Any thoughts, Nikolaj?

Nicolaj (19:34):

I really like the idea of the security champions: having a designated person in the team who has ownership of the team's interest in security. Of course, you need everyone in the team to take part in it. For quality in software, we need to agree that we will uphold a certain quality. We would agree that we will try to make secure software, but the responsibility shouldn't fall between chairs. We have this saying: if nobody is the owner, who's to make sure that it actually happens?

Anne (20:10):

Maybe we could have a combination of those: the security champions and then a security champion practice community, so they can actually share experiences. Of course, it depends on the project. If it's a really large project with several teams, maybe the security champions in the teams need to work together, especially if one team is making a change, maybe to a platform, and the other teams need to understand what the security implications are for them. It needs a lot of correlation there.

Lauri (20:44):

Yes. Collaboration.

Nicolaj (20:47):

The favorite word.

Lauri (20:50):

Yeah. Practice communities around security champions. I'm thinking, how does it sound to you, Nicolaj? Without naming individual customers, when you think about the overall readiness of software development organizations to adopt this approach, where do you think organizations typically are in their maturity to adopt something like this?

Nicolaj (21:11):

I think it makes a lot of sense for a lot of them to start adopting it. I also think that with the examples given, introducing it on an epic or user story basis, you're lowering the bar as much as you can, down to the task level, so it's something you could introduce. If you're already working in a way where you cut your problems down into individual tasks, then I think the only thing an organization needs is to prioritize it and give the developers the time for taking the freeform notes and revisiting them afterwards. A lot of the time when we go out somewhere, we see a tools team that is just so busy, either maintaining a large platform or fighting fires on a number of tools they have already introduced. It might feel daunting to add work to each of these tasks, until you realize that this is as important a part of the task as implementing the solution, or figuring out which solution is the right one to work with in the first place.

Anne (22:26):

Yeah. You will definitely need time, there's no way around that, but it does help that threat modeling doesn't need any specific tool. There are tools for threat modeling, but they're not necessary. If you can draw something and write something, that's the minimum requirement.

Lauri (22:44):

I like that you said earlier that it's maybe not replacing anything you already have in place, but adding to everything there is. That is also a safe way for organizations to start practicing it, because they're not introducing additional attack vectors into their way of working by introducing something extra. It's hard for me to imagine that by introducing threat modeling you could weaken your security. It can only go in a better direction.

Lauri (23:20):

And then, if that is true, maybe it can link back to the blameless post mortem: you go back as a team and think, "Okay. What went well this sprint in terms of threat modeling, what went wrong, and how can we improve?" You basically keep getting better until you reach a point where you think, "Okay, it took a while, and we had to learn, and we had our mishaps, but now we've gotten to a point where we can really see the benefit." And you're doing it in a safe way, so that you're not making drastic changes in the way you apply and approach security. While you are becoming better, you're not introducing regressions in your security, so to say. I'm not sure if I'm explaining myself clearly enough here, but that's what I thought while you were discussing.

Nicolaj (24:15):

I like the idea of the explicitness of the decisions that were made. I'm still very young in software development, having only been out in the wild for three years, but I discovered a tool like decision logs a couple of years ago, where we can actually write down what we talked about, again in freeform, and go back to it when something goes wrong, or when we're looking at why we designed a system like this, and then we have our arguments. The same thing with threat modeling: you can revisit the decisions you made and what you're fixing, and when you have the blameless post mortem of what went wrong, ask: was our worldview wrong, did we prioritize what we implemented incorrectly, or did we even think about this?

Anne (25:09):

Yeah. I think it's good that you think it's a good idea to write things down, because I think the worst case is that you develop an application quickly, for whatever reason you need to develop it quickly, it sometimes happens, but then you don't document anything about it. Then it's a really big question mark what the security posture of the thing is, because you don't know the decisions anymore if you weren't there. Actually, even if you were there participating, you can really quickly forget the details.

Anne (25:44):

I actually have an example of this. We were threat modeling epics with my customer teams. We started on an epic, thinking about what could go wrong. We identified several things, but we ran out of time because we were doing other things in the workshop as well. And then, when we continued the next week, at first everybody thought, "This sounds a bit familiar." Then we remembered: "Ha! We already thought about this," and we scrolled down, looked for our notes and comments in the epic in Jira, and figured out that we had identified different things on the second go. It was really good to have the previous notes, because there were loads of to-dos and things to implement; there were really good notes about not doing validation properly. People had really thought about all kinds of evil things that could happen. There were a few people there, and everybody had forgotten in a week what we had been talking about. That's really crazy. If you're busy and working on different things, you can really easily forget why you chose to do something a certain way.

Anne (27:02):

Also, I like the idea of continually improving and learning, because you don't need to do it perfectly the first time. It might sound scary: "Okay, I need to learn this threat modeling method." It sounds difficult. But it's good to just do something and find a few things, and that's okay. You can aim to find at least one threat first, and then the second time, try to find two threats. It's not mandatory to be perfect the first time. Doing some threat modeling is much better than doing no threat modeling at all.

Lauri (27:47):

It's Lauri again. In highly regulated industries such as finance, both security and compliance are key. Agile methods, self-organizing teams and daily releases might seem to be only for unicorn companies, but that's not true. We recently ran a webinar with Jesper Eriksen from Bankdata in Denmark. They shared their DevOps journey, and you can hear how DevOps practices and tools help integrate security and compliance requirements in software development. You can find the link in the show notes. Now, let's get back to our show.

Lauri (28:21):

There were some terms you used earlier, and I wanted to come back to them and give you a little more time to go into the techniques. Many of us, and many of the listeners, are intrigued by different approaches. Evil user stories was one term you used, and then there was the STRIDE model. Maybe we could take a few moments to enlighten our audience about the different threat modeling techniques.

Anne (28:51):

Yeah. Sure. There are several threat modeling techniques, which tells you that none of them is perfect. Each is useful for a certain purpose, and people have different mindsets and ways of thinking; that's why there are several. It's also okay to use a few techniques together. Evil user stories is something I like a lot, because it builds on the user story format and helps you think about the problems in features, whether they are for end users or admins. The idea behind evil user stories is that you think in a user story format: "Okay. I am a cyber criminal and I want to steal credit card numbers to make money." But I've noticed that sometimes you run out of ideas about what a cyber criminal would want to do.

Anne (29:44):

I prefer another way: you first think about the assets, anything important in your application or system that you want to protect. For example, personal data, your algorithms, your credentials, certificates and signing keys, and you list those. And then, for each asset, ask: what bad things should not happen to this important data or resource? You can complete the sentence: an attacker should not be able to do what? Or a user; it's not always about attackers, it can be a mistake. An attacker should not be able to purchase stuff on our website without paying for it, or an admin should not be able to accidentally delete files on the system, or many simultaneous users on the website should not be able to crash it. Things like that.

Anne (30:40):

And then, after you have a list of these negative scenarios, you start to refine them a bit: how would the attacker actually be able to buy stuff without paying for it, or how would the site crash? You list some scenarios, and, more importantly, you think about how you could prevent this. If it's about seeing somebody else's data, just think: "Okay. You can get somebody's password, and that's the easiest way to see somebody's data and account. You don't need any injections or technical vulnerabilities for that." But vulnerabilities are another option, so you can maybe draw the conclusion that we should scan for known vulnerabilities in our code, and we should make sure that if you try to enter the wrong password too many times, the account gets locked temporarily. And if somebody is trying to get into somebody else's account, you could catch that by having logging and monitoring in place. It doesn't have to be about very techie attacks. It can be really simple things. That's the idea of evil user stories. What do you think? Would that work in your context?

Nicolaj (31:55):

I think evil user stories sound like a great tool, because as you mentioned in the beginning, you might not want every developer to have to become a security expert, and you might not already have every developer being one. It helps them get into the mindset of how a user, malicious or benign, could misuse the system to gain access to assets one way or the other. It also sounds a bit fun, like a game day: you're role-playing how we would break this thing we've been working on, or how a malicious user would try to break it.

Anne (32:39):

Yeah. It can be really fun thinking of all the ways. You get to be creative. That's actually something I like about security testing and also threat modeling: you can really brainstorm and get to think of something that nobody else has thought of before.

Lauri (32:58):

This is maybe not a comprehensive list of the techniques you had in mind, but one you mentioned was STRIDE.

Anne (33:04):

Yeah. STRIDE is ... I guess you can't talk about threat modeling without mentioning STRIDE. It's quite a technical threat modeling method, especially for modeling data flows: you have some data sent by a process and received by somebody else. Each letter in STRIDE stands for a specific threat type: S for spoofing, and then there's tampering, repudiation, information disclosure, denial of service, and elevation of privilege. It's especially useful for finding flaws or weaknesses in the architecture. There are actually even playing cards for it, called Elevation of Privilege, by Microsoft. You can download the cards online, as a PDF at least, and I think somebody sells them as a physical card deck. You can have a game session, showing your hand poker-style: "Hey, I have information disclosure here." Then you of course try to identify where in your system you would have that information disclosure, and you can score points if you find more threats than the others.
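
The mechanics Anne describes, walking each data flow through the six STRIDE threat types, can be sketched as a workshop-prompt generator. The element and flow names below are made up for illustration; only the six STRIDE categories come from the method itself.

```python
# Sketch of STRIDE applied per data flow: for every flow in the architecture,
# produce one discussion prompt per threat type. Flow names are invented.

STRIDE = {
    "S": "Spoofing",
    "T": "Tampering",
    "R": "Repudiation",
    "I": "Information disclosure",
    "D": "Denial of service",
    "E": "Elevation of privilege",
}

data_flows = [
    ("browser", "web frontend"),
    ("web frontend", "payments API"),
    ("payments API", "orders database"),
]

def stride_prompts(flows):
    """Yield one workshop question per (data flow, threat type) pair."""
    for source, target in flows:
        for threat in STRIDE.values():
            yield f"{threat}: how could it affect the flow {source} -> {target}?"

prompts = list(stride_prompts(data_flows))
# 3 flows x 6 threat types = 18 discussion prompts for the session
```

This also shows why STRIDE suits architecture review: the checklist is exhaustive per flow, so nothing depends on someone spontaneously imagining the right attack.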

Lauri (34:12):

I hope there are negative points though, considering that the ... Well, finding them is a good thing. Leveraging them is a negative point. Interesting.

Nicolaj (34:19):

I heard a game session. I'm all sold.

Lauri (34:23):

I got interested in that because very, very recently we introduced a game for another purpose: a pipeline game played on a continuous integration, continuous delivery pipeline. My thinking immediately went, "Okay. Can we take that gaming engine we developed and model the playing cards on it?" Yeah. We should learn more about that first. But, Anne, you can check out our pipeline game. If you just search for pipeline game Eficode, you will certainly find it online.

Anne (34:59):

I need to check that out. Talking about games, I actually have another one. Nixu developed something called Cyber Bogie. These are cards with different attacker and other harm-doer types, stereotypical characters that could harm your security and privacy. We have all the really stereotypical ones, like script kiddies, nation-state attackers and supply chain malware. We also have social engineering victims, like the developer who loves to help, and the profits-first marketer. If you have trouble figuring out who would actually be motivated to attack or do something, you can think through the cards. We actually have a game for it. It's in our GitHub, so you can download the cards freely and see what kind of attacks you find that way.

Lauri (35:54):

Profits first marketer, really?

Anne (35:55):

That resonates with you.

Lauri (36:00):

Yeah. It resonates in an ashamed way. Maybe marketers shouldn't be that much about profits and more about what benefits they bring to customers. Well done there. I'm starting to look towards the end of our conversation. There's only one question remaining, and then I'd really like to give the floor to both of you if there's anything else you want to say. My last question would be this: okay, now you have been sold on this concept and you have been introduced to some of these techniques. Maybe you have adopted the blameless post mortem as a way of continuously improving, and maybe you have set up your communities of practice for the security champions, everything we have discussed. Where is good enough? How far should you take it? I understand that it's never ready. But where is this 80/20 point where you can say, okay, now it's good enough and we can be satisfied with where we have come?

Anne (36:58):

There are a few aspects to consider when asking whether we have done a good enough job of threat modeling. For example, if you're thinking about architecture threats, you could ask: have we thought about something for each architecture element, for example a database or any data storage? Have we thought about all the features in the system, or, if we are doing it in batches, all the features we are going to release next? Have we thought about all the harm-doers we identified? If we earlier decided that we are worried about the script kiddie, somebody internal making a mistake, and also the marketer, then ask: have we thought about what these persons could do, intentionally or unintentionally? Do we have something to tackle these threat scenarios? I think it's also good to remember the mistakes that could actually happen and harm security.

Anne (37:53):

Also, maybe consider whether you have asked for viewpoints from enough people; you shouldn't do it on your own, because you have your own viewpoint. Maybe you're thinking only about the development part or only the business part, but have you asked, for example, the testers? They might have really good ideas on how to break the system. That's something to consider. And if you find that you haven't thought about all the scenarios, then revisit the threat model. It's more important to keep updating it continuously than to think, "Okay, now we have done it once, we are good to go and we don't need to threat model anymore." Threat modeling is not going to find all the things, but if you have other security practices, like security test automation tools and code reviews, they will cover the things you may have missed.
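[Editor's note: the "good enough" checklist Anne describes can be codified as a simple coverage check. This is an illustrative sketch with invented names, not a tool mentioned in the episode: given the architecture elements and the threat scenarios recorded so far, flag the elements nobody has thought about yet.]

```python
# Illustrative coverage check for a threat model: has every architecture
# element had at least one threat scenario recorded against it?
# All element names and scenarios below are invented examples.

def uncovered_elements(elements, threat_model):
    """Return the architecture elements with no recorded threat scenario."""
    return [e for e in elements if not threat_model.get(e)]

elements = ["web-frontend", "api", "database", "message-queue"]
threat_model = {
    "web-frontend": ["script kiddie defaces the site"],
    "api": ["internal user makes a configuration mistake"],
    "database": ["information disclosure via a leaked backup"],
    # "message-queue" has no scenarios yet, so it gets flagged below.
}

print(uncovered_elements(elements, threat_model))
```

The same shape of check works for features per release batch or for the harm-doer types: list what should be covered, record what has been discussed, and let the gap drive the next threat modeling session.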

Lauri (38:53):

How would you extend that, Nicolaj, when you look at it from software development practices' perspective more broadly? What thoughts does that raise?

Nicolaj (39:01):

Well, I really appreciated the comment about whether you have included the right people, or included more people; this can also foster collaboration. An analogy from an entirely different field, writing: when you are done writing something, you correct your own typos, but does that mean you're done? You can't really proofread your own work. Of course you should include more people in working on these threat models together. I think it ties very nicely into whether you call it DevOps or DevSecOps.

Lauri (39:37):

Where should people go to get ... You mentioned one book in the beginning. Let me ask it more broadly for you, Anne. People online who are listening to this and they think, "Okay. I really need to get to the bottom of this," where should they get started? What are your top first three things or first resources of information you would advise people to look in to get them going in this practice?

Anne (40:01):

The book I mentioned was by Adam Shostack. It's called Threat Modeling. And I'm just trying to think of the top three things. Maybe you can start by learning one threat modeling method. Maybe evil user stories are something to start with, or STRIDE, and then try to use that first. Then, when you think, "Okay, now I can use this," find other sources. Yeah. I might think of a few links to give you, but I don't have anything on the top of my head, except for that book. It's a really good book.

Lauri (40:38):

Yeah. Maybe we put them in the show notes so people can go and refer to them later. Thank you a lot for those. They'll help people get on the right journey and make it easy for them to get on board. Any last comments you'd like to make before we close? Let's start with Nicolaj and then give the last words, the closing words, to Anne.

Nicolaj (41:00):

I think the closing words would just be: listen to what Anne says about getting started with this. Work it into a sprint or into how you work with tasks, take the freeform notes when you are figuring out which parts of your system your new feature or maintenance work will be touching and what changes you'll make, and then revisit it when you're done. What did we change in the system and what do we need to think about? Did we introduce some new possible vulnerabilities, or are we actually done with the task or feature we were working on? And then, just get started with it. It sounds super interesting.

Lauri (41:42):

Yeah. Just get started with it. I think that's precisely the right attitude. Yeah. Anne, the final words from you.

Anne (41:50):

I just want to say that threat modeling is something that shifts security left, so we can find problems early. If you think about, for example, that you find a security problem or a bug in your production: if we have done penetration testing, we could've found that. If we shift left, we could have done some static code analysis or scanning. We may have found the problem then, but if we shift left even more, we could have caught that problem with threat modeling, and that's way earlier than any other phase of security reviews or tests.

Lauri (42:27):

That is a superb observation. I think it speaks to the hearts of the many, many people who talk about shifting left in their own work. That's a wonderful conclusion. Well, time runs, and I'd very much like to thank you for joining us. As said before, we'll add the reference to the book in the show notes. We'll add the important getting-started material in the show notes as well. Thank you, Nicolaj, for joining, and thank you, Anne, for such a wonderful conversation.

Anne (42:58):

Thanks. It was fun.

Nicolaj (42:59):

Thank you. Much appreciated.

Lauri (43:01):

Thank you for listening. If you want to continue the conversation with Anne and Nicolaj, be sure to check out their profiles on Twitter and LinkedIn. You can find the links to their profiles, as well as links to the content they referred to, in the show notes. If you haven't already, please subscribe to our podcast and give us a rating on our platform. It means the world to us. Also, check out other episodes for interesting and exciting talks at DevOps Sauna. Finally, before we sign off, I'd like to give the floor back to Anne and Nicolaj to introduce themselves properly. I say now take care of yourself, and remember to secure your software supply chain.

Anne (43:38):

My name's Anne Oikarinen. I'm a Senior Security Consultant at Nixu. Pretty much all of my career has revolved around software security. Actually, I started as a software tester, but pretty quickly realized that I'm good at breaking stuff, so I got interested in security and studied that as well at university. Currently, what I do is help development teams make more secure software. I think that security is best built in, so I very much like to work with development processes: how to include security work in the process in a simple way. I also do threat modeling with the teams, thinking about what can go wrong and what we can do about it, and some security testing, but I think threat modeling is one of my favorites when it comes to finding security problems early.

Nicolaj (44:33):

My name is Nicolaj Græsholt. I am a DevOps Consultant at Eficode. I've been there for three years. My origin was at the University of Aarhus, where I did a master's degree in computer science with a specialty in cryptography. Security has always been something I was very interested in. Now I work with people out in organizations, and I'd say that security still has a special place in my heart. When I had a chance to join this call, I just jumped right on it.