Busting security myths

Are password changes really keeping us secure? Is writing passwords in a notebook always bad? Can Apple devices really not get viruses? In this episode of DevOps Sauna, Pinja and Darren take on some of the most common myths around cybersecurity. From password managers, MFA, and VPNs, to cloud security, insider threats, and even the risks of random USB sticks—no misconception is safe. Join us as we separate fact from fiction and uncover how security really works in practice.
[Darren] (0:02 - 0:22)
Multi-factor authentication is the only thing stopping your account being compromised, even if you have an 89-letter password. Welcome to the DevOps Sauna, the podcast where we deep dive into the world of DevOps, platform engineering, security, and more as we explore the future of development.
[Pinja] (0:22 - 0:44)
Join us as we dive into the heart of DevOps one story at a time. Whether you're a seasoned practitioner or only starting your DevOps journey, we're happy to welcome you into the DevOps Sauna. Hello, and welcome back to the DevOps Sauna.
I am once again joined by my co-host, Darren. Hey, Darren.
[Darren] (0:44 - 0:46)
Hey, Pinja. How's things going today?
[Pinja] (0:46 - 0:51)
It is a really fine September morning. It's gonna be better, I would say. How about you?
[Darren] (0:51 - 0:55)
I mean, it's Monday. I think that sums it up about as much as I need to.
[Pinja] (0:55 - 1:34)
That's true. And throughout this year, 2025, we've had a couple of episodes where we talked about myths, and we've been trying to bust these myths. So let's get into one of these topics again.
So we get to combine today a favorite way of presenting things with one of our favorite topics, which is security. Today, we're going to be busting some security myths. Over the past couple of years, and especially this past year, there have been events showing that the companies we work with, and everybody else in the industry, are not taking security that seriously, and it's not exactly going very well based on the statistics either.
And there are a lot of misconceptions. So I think it's time to bust some myths.
[Darren] (1:34 - 1:48)
Yeah, I would think so. I'd say we see a lot of variation in practices. Some people are doing things absolutely right, and some people are doing things absolutely wrong, and there's a lot of middle ground in between.
So let's see how much of that we can cover.
[Pinja] (1:48 - 2:05)
Let's see about that. We've got a couple of different topics here that we want to discuss today, but maybe one of the most visible things to people is passwords. And in many organizations still today, the practice is to change your password every 30 days and to consider that enough to keep you secure.
Is that true?
[Darren] (2:06 - 2:14)
No. And actually, I probably should give more than one word answers if we want this podcast to last more than five minutes.
[Pinja] (2:15 - 2:15)
Yeah, they... Probably, yeah.
[Darren] (2:16 - 2:44)
A lot of companies are actually shifting away from password changes now, because password changes really only guard against two kinds of attacks. One is the shoulder surfing, bad password hygiene kind of thing, where someone watches you type a password in or you write a password down. The other is database leaks. And these are two...
I mean, we could actually go into database leaks a little bit in the next question. We have a little list we're working on, so let's not double up here.
[Pinja] (2:44 - 3:00)
Yes, let's not. But we've always been encouraged to write a very long, complex password with symbols and told that those are actually better than passphrases. But my understanding is that is not always the truth.
Is that correct?
[Darren] (3:00 - 3:26)
Yeah, because it's like, what is a passphrase even? There is no functional difference between a password and a passphrase. So there's this XKCD comic that...
If you just Google XKCD, correct horse battery staple, it will get you the comic. And it's this idea that we have been told to have passwords that are 12 characters long, with lowercase and uppercase letters, a symbol, and a number. And we think that's secure.
[Pinja] (3:26 - 4:07)
Obviously. So it's kind of like gibberish, whatever it is. And sometimes, even if we think about...
I know we're getting into the databases soon, but you might have a master password for the database. Is it something... How can you remember that?
Or are you making, for example, some changes to that gibberish to make it more memorable, starting from a non-gibberish base word? Something uncommon? A passphrase that only you would remember?
So let's take, for example, my apple tree is full of apples. And how many substitutions can you make? For example, did you have an O in there?
Can you make that into a zero instead? Can you make some caps in the middle of that? Did you make an A into a four instead?
[Darren] (4:07 - 5:19)
And what you're kind of working towards there is the idea of entropy. That's what password rules have tried to do with these substitutions, but it's not a good way of creating entropy.
A good way of creating entropy is length. The easy example is: if you have a password that is six words, each word six letters long, you instantly have a 36-letter password, and every additional letter past 10 or 12, even if it's all lowercase, adds exponentially to the time to crack. And that's why we're looking at databases, because when a password database is leaked, weak passwords can be cracked by various means.
But if your password is 36 characters long, the average time to crack it is, I think, somewhere along the lines of two billion years. So changing a password then doesn't really matter. So if you make your password, say, obvious sandwich accident, there you have a password that's well over 20 characters long.
Do a couple of simple substitutions, and it's easy to remember. So you want entropy, and the best way to increase entropy is length.
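To make the length-versus-complexity point concrete, here is a minimal back-of-the-envelope sketch in Python. The guess rate and the blind brute-force model are assumptions for illustration only; real attackers use dictionaries and previous leaks, so read the numbers as scale, not prediction.

```python
# A back-of-the-envelope sketch only: assumes a blind brute-force attacker at a
# fixed guess rate. Real attacks are smarter, so treat the output as illustration.
GUESSES_PER_SECOND = 1e12   # assumed offline cracking speed

def years_to_exhaust(charset_size: int, length: int) -> float:
    combinations = charset_size ** length
    return combinations / GUESSES_PER_SECOND / (3600 * 24 * 365)

# An 8-character password drawn from ~95 printable symbols vs. a 25-character
# lowercase passphrase such as "obvious sandwich accident".
print(f"8 chars, full symbol set : {years_to_exhaust(95, 8):.6f} years")
print(f"25 chars, lowercase only : {years_to_exhaust(26, 25):.3e} years")
```

The short, complex password falls in under an hour at that assumed rate, while the long lowercase phrase runs into the quadrillions of years, which is the entropy-from-length argument in one comparison.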
[Pinja] (5:19 - 5:34)
So basically, the guidance over the past decades has taught us that if we come up with passwords that are hard for us to remember, we're secure. But that can actually make them easy for computers to guess, right?
[Darren] (5:34 - 5:49)
Yeah, I mean, we're starting to see it where maybe even 12 characters is not enough. Eight certainly isn't. And there are still services today that require eight-letter passwords.
And no, they're going to get broken in minutes if you have your password leaked.
[Pinja] (5:49 - 6:14)
That's very interesting. So let's talk about leaks when it comes to passwords. And we talked about the databases, right?
So many of us are nowadays using password managers. We have multiple services, and many people actually have one on their phones, like Apple Passwords. So one of the myths is that password managers are unsafe because they provide a single point of failure.
We talked about leaks, but is it actually true that these aren't safe?
[Darren] (6:15 - 6:52)
It's difficult to say. So I don't think password managers are inherently unsafe. The idea that they're a single point of failure is true.
If your password database becomes corrupted and you don't have a backup, then you lose access to every platform that relied on the password manager. Now, some password managers, like 1Password, I believe, get around this by syncing your passwords externally to a cloud service, but then you have that attack surface where your passwords are no longer on your own system.
[Pinja] (6:52 - 7:24)
So we have our password databases, our password managers, such as Bitwarden, but then we also use multi-factor authentication, MFA. Often with 2FA you get a text message sent to your phone or a code sent to your email, and some people have said that this alone might be enough. Or, the other way around, that if your password is strong enough, and we just talked about how you can make it stronger, you don't need MFA at all.
This is a very strange thing that I've heard people talk about.
[Darren] (7:24 - 8:36)
Yeah, the fact that some people are suggesting this is super weird, because the whole point of multi-factor authentication is that it creates a second layer that is required. Multi-factor means you have something you know and something you own, or in the case of biometrics, something you know and something you are. It raises the cost of an attack: MFA will help you if you have a really bad password, but having a really good password doesn't mean you get to skip MFA.
And I do think we should also talk about MFA in terms of data leaks, because so far we've talked about password managers, but there are also authentication leaks from websites. For example, if the database where their user tables are stored gets leaked, then there are hashed or encrypted versions of the passwords for all of the users, and those are another avenue of access. And maybe they aren't even hashed, maybe they're just stored in plain text because the system is from 1980 or whatever excuse they make. Then multi-factor authentication is the only thing stopping your account from being compromised, even if you have an 89-character password.
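As a side note on leaked user tables, here is a minimal sketch of the difference proper password storage makes, using hashlib.scrypt from Python's standard library. The parameters and the example passwords are illustrative only, not a recommendation for any particular service, and they don't represent how any site mentioned in this episode stores credentials.

```python
# Sketch: passwords should be stored as salted, slow hashes, never in plain text.
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = b"") -> tuple[bytes, bytes]:
    salt = salt or os.urandom(16)                     # fresh random salt per user
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify(password: str, salt: bytes, expected: bytes) -> bool:
    return hmac.compare_digest(hash_password(password, salt)[1], expected)

salt, stored = hash_password("obvious sandwich accident")   # what a site should store
print(verify("obvious sandwich accident", salt, stored))    # True
print(verify("guess123", salt, stored))                      # False
```

If a table of salted scrypt digests leaks, every guess has to be run through the slow hash; if plain-text passwords leak, there is nothing to crack at all, and MFA really is the last line of defence.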
[Pinja] (8:36 - 9:13)
Yeah, so that's a very dumb assumption to make. But then again, we're talking about people, and how many people do we have on the planet, and how many different opinions do we have amongst the users of a service or a device anyway. One way to store your passwords apart from password managers is, of course, somebody's notebook.
I'm now talking about a physical notebook. Somebody has a notebook lying around. It's really nice that at least people have the idea that I need to have a different password for different services.
That's why I need the notebook. But is writing passwords down always insecure?
[Darren] (9:13 - 9:34)
It's always a practice that's discouraged, but think about it: at home you probably have a drawer that contains important documents and private files, things that need to be kept secure. I'm not going to say writing passwords down is a good thing, and if you write them on a post-it note stuck to your laptop, then that's a no.
[Pinja] (9:34 - 9:34)
Yeah.
[Darren] (9:35 - 9:56)
But if you are going to write passwords down physically, you have to formalize the process by putting them all in one notebook and making sure that notebook is stored with your things like your passport in some kind of locked drawer or safe. And then it's like, sure, it's not the best practice, but at that point, it's by far not the worst.
[Pinja] (9:56 - 10:08)
No. As you say, having a post-it note on your laptop, so the password is attached to the very device it unlocks, is, I guess, one of the worst ways of doing this.
[Darren] (10:08 - 10:12)
Yeah. It's like leaving the key in your front door.
[Pinja] (10:12 - 10:14)
The robber just says thank you to that.
[Darren] (10:14 - 10:15)
Yeah, they're just going to walk in.
[Pinja] (10:15 - 10:42)
That's how you do it. Okay, we just clarified that topic, so let's talk about devices instead.
The world is, in principle, divided between people who like to use Apple products and people who prefer not to. You're either an Apple person or not, but many Apple owners say, oh, my Apple devices cannot get a virus. But I think we've had some events that might show otherwise.
[Darren] (10:42 - 11:05)
The Pegasus spyware is probably the most high-profile one coming out of Israel. But I will say that the Apple ecosystem is by default more locked down than the Windows-Android combination. So it's hard to say that there's no truth to it, but it's not like they can't get viruses.
It's just less likely.
[Pinja] (11:05 - 11:39)
That's true. If you are an Apple user, do not fall into a false sense of security just because you have an Apple product. It is possible to hack into it, just slightly less likely than with other devices.
But think about connecting your device to Wi-Fi. Many times we are advised against using public Wi-Fi networks, and yet people are still doing it, especially with their work laptops and work phones. And I've heard this claim that says, well, the network name seemed legit. The name looked fine.
[Darren] (11:39 - 13:17)
Okay, there's actually a story attached to this one, and it goes off on a bit of a tangent, so just bear with me.
I saw this post on LinkedIn by a guy, Sven Johnson, and he was basically stating that free Wi-Fi isn't worth the risk. And honestly, that's mostly viewed as a kind of mid-2000s thinking, because everything should now be delivered over TLS.
It should be encrypted. Everything should be going through a VPN. There are all kinds of safety precautions in place.
So the thought that public Wi-Fi is not secure is in some ways outdated. In some ways it's not: if you connect to a hostile network that an attacker is actively trying to pivot through, it's entirely possible that you're just adding your own machine to the potential attack surface. But things are changing there and we're taking on more security, so I don't know that it's going to be valid for much longer to say that public Wi-Fi is bad.
The reason I wanted this one in here, though, was to talk about what happened on LinkedIn following that post, because the advice is outdated. What happened was the guy received maybe 30 or 40 comments from all these cybersecurity people coming out of the woodwork to point out the same thing, as if they were going to get a prize for pointing out that one plus one is actually two.
[Pinja] (13:18 - 13:18)
Nice. Okay.
[Darren] (13:19 - 14:04)
Yeah. And I think the problem here is the lack of communication skills demonstrated by a lot of people in cybersecurity, because we are in a position where people have to be able to approach us and discuss things with us. I just put myself in the shoes of the people at these responders' companies.
Like, the next time they have a breach, are they going to open up to this security person who they've just seen blasting some random stranger on LinkedIn, trying to prove how smart they are because they were able to add one and one together? We have a responsibility to the people in the companies where we work. And I guess the myth here is that cybersecurity people can have some communication difficulties.
And I'd say that one's confirmed.
[Pinja] (14:05 - 14:09)
And this comes from a security person. Absolutely. Yourself, right?
Yeah.
[Darren] (14:09 - 14:35)
I've been saying for years that communication is the underrated skill in cybersecurity. If you can't explain why... I mean, we're not AI. AI is the only thing that's allowed to not explain why, because that's how these black boxes work.
Data goes in, info comes out. And I'm talking about machine learning, not large language models here. But as people, we have to explain why.
And if we can't, that's something we should work on.
[Pinja] (14:35 - 14:49)
And some of the practical tips that came up in the discussion around this LinkedIn post included the VPN, obviously. And I need to mention one that the guy proposed as well: if it doesn't work, just enjoy life offline. That was of course one of the tips.
[Darren] (14:50 - 15:14)
That's actually exactly right. I don't know when it became normalized for people to work on potentially private company things on a train, where, you know, you can have a privacy screen, but someone sat right behind you can still see everything you're typing. You can try to be private about it, but you're still in a public place.
And you know, not revealing those things in public situations is a completely valid defense.
[Pinja] (15:15 - 15:42)
Yeah. This also kind of creeps into the topic of how do we work? How do people work?
How do we consider our private time, et cetera. But that's still a very, very valid point. But speaking of VPNs, that's one of the things that, unfortunately, people treat as a silver bullet for all of these problems.
If you use a VPN, you're basically unhackable, is what is being said. And I think that oversimplifies things quite a bit.
[Darren] (15:42 - 16:03)
Yeah, absolutely right. Like, I mean, there's a lot of layers to unpack there that I don't think we need to, but VPNs are part of an ecosystem of security. They're not a silver bullet that solves every problem.
In fact, there are a lot of problems that the increasing use of encryption is actually solving, making VPNs less necessary.
[Pinja] (16:03 - 16:37)
In addition to a VPN, there are a couple of things we're encouraged to have on our personal devices and our work devices, so let's talk about firewalls and antivirus programs. People still feel that it's enough to install these things on the laptop or the phone, and they're still being marketed. If I buy a new phone, I'm asked whether I'd like an antivirus program on my mobile phone.
And installing a firewall on the individual computer is probably not where that should actually be happening, is it?
[Darren] (16:37 - 17:08)
Yeah. I actually put this one in here because I saw it in a client security presentation that said you needed to have a firewall on your system. And it was baffling to me, because it really should be part of your network, not part of your computer, unless you're hardwiring your system straight into the internet, at which point it probably should be. But even most of those connections sit behind network address translation now.
So they can't be reached directly from the public internet anyway.
[Pinja] (17:08 - 17:32)
Yeah. So probably we shouldn't be making individual employees responsible for that. In the cases where you are doing some hardwiring, then maybe, but most of us are not.
But how about antivirus? I think this is again one of those silver bullet things, right? Once you have it, all your worries go away. It definitely protects you from most virus issues, but what else?
[Darren] (17:32 - 18:04)
It doesn't help that the companies in this space are making it a little more fluid, because they're trying to pivot from antivirus to EDR, endpoint detection and response, which has things like anomaly detection. So if you are working with an EDR tool, that can actually be everything you as a person need, with everything else offloaded to the network. But you need to know the difference between an antivirus and an EDR, because if you're only running an antivirus, there might be room for improvement there.
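To give a feel for what "anomaly detection" means here, this is a deliberately toy sketch, not any vendor's actual EDR logic: baseline a behaviour, then flag large deviations from it. The baseline numbers are made up.

```python
# Toy anomaly detection: flag values far outside the endpoint's usual behaviour.
from statistics import mean, stdev

# Hypothetical baseline: processes launched per hour on this endpoint last week.
baseline = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15]

def is_anomalous(observed: int, history: list[int], threshold: float = 3.0) -> bool:
    mu, sigma = mean(history), stdev(history)
    return abs(observed - mu) > threshold * sigma   # simple z-score rule

print(is_anomalous(14, baseline))    # False: normal activity
print(is_anomalous(240, baseline))   # True: e.g. a script suddenly spawning hundreds of processes
```

An EDR layers many signals like this, signature matching and response actions on top, which is the gap between it and a plain antivirus.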
[Pinja] (18:04 - 18:30)
Let's talk about one aspect that we kind of touched upon already. You mentioned the communication skills of cybersecurity people, so the key word here is people.
And that's one of the attack surfaces that we know of in the recent attacks. There are a couple of elements here, but let's start from the claim that, well, we're such a small company, why would anyone be bothered by us?
Is that one of your favorite things to hear, Darren?
[Darren] (18:31 - 18:46)
I hear this a lot. I actually do it as part of a demonstration when I go to clients and they say, well, we don't need to worry about that. If you go to the website, ransomware.life, which sounds like a bad thing to navigate to, but it's okay.
[Pinja] (18:47 - 18:47)
Yes.
[Darren] (18:47 - 19:01)
It basically tracks ransomware attacks across a load of exploited companies. Just scroll down that page and see how long it takes you to find a company that you recognize. Most of the victims are small companies.
[Pinja] (19:01 - 19:13)
Yeah. And right now their website says 21,901 victims on the list. So we're not just talking about the Fortune 500 here, right?
That's, like, actually a huge number.
[Darren] (19:13 - 19:35)
Yeah, and it's kind of interesting. You can basically scan most of the internet for common vulnerabilities in a day, if you have the means to do it. They don't care who you are. They don't even look at your company.
They're looking for a way in first. And then, when they have that, they investigate your company.
[Pinja] (19:35 - 19:53)
And in financial services alone, for example, there are 270 victims listed here. So thinking, oh, they must not be interested in us, is not a surefire way to secure your company. The hackers do not actually care what kind of company you are or who you are.
[Darren] (19:53 - 19:57)
They care if you have money and are willing to give them that money to get your things back.
[Pinja] (19:58 - 20:16)
Yep. And let's talk about insiders. In most companies we're trained once a year, hopefully, on how to actually prevent the company from being attacked.
But there's one fallacy that I find kind of amusing, which is that insiders wouldn't be a big risk compared to hackers. Why is that still a belief?
[Darren] (20:16 - 20:41)
I honestly don't know. Whether we're talking about accidental or malicious insiders, they add up to a huge portion of attacks, because it turns out that your standard security measures in most cases work; passwords with multi-factor authentication work. So people are usually the flaw. I'm not saying we have to make sure that people aren't there.
We have to make sure people are trained and understand.
[Pinja] (20:42 - 21:06)
Yeah, that's true. And we've been talking about phishing emails quite a lot. I know I've been part of this group as well: of course I would recognize a phishing email when it comes my way.
Of course I would see the bad English, or Finnish, or whatever the language would be. I would immediately detect that the email address is not legit, or that the link they provide is not legit. But that is no longer the case, is it?
[Darren] (21:06 - 21:18)
No, I really miss the days of the Nigerian royal family emailing me and asking me for financial assistance. But I mean, we actually have a really good example of that just last week.
[Pinja] (21:19 - 22:15)
Yeah. There's the npm supply chain attack from last week, where, was it, 20 packages were impacted, with a combined total of over 2 billion downloads a week. And this was done by a phishing email.
The maintainer who actually came forward with this, we still tip our hat to you. He said that he got an email saying his 2FA was due for an update, and everything looked very legit. Nothing about it looked suspicious in any way.
And lo and behold, what happened? So it is not obvious anymore. Using Gen AI to create these emails is quite easy, so the language doesn't tell you much anymore, to be honest.
Let's talk about the cloud. Everything's in the cloud nowadays.
And there is this one fallacy, a myth I would say, that the cloud would automatically be more secure. But do people actually believe this?
[Darren] (22:15 - 22:53)
There is a lot of rhetoric that cloud platforms are more secure than on-prem systems. And given that the big providers can actually put resources into this, it centralizes a lot of the effort and means that things get addressed faster. They get addressed more cleanly and more smoothly, but zero-days are still zero-days.
And at the point you have a breach like that, the damage, let's say, is going to be much higher. So it just requires a lot more vigilance on the side of the provider.
[Pinja] (22:54 - 23:17)
And there is one element of compliance here: the belief that if you're compliant with all the security requirements, you must automatically be secure. Going down this path and thinking, oh, but the cloud is so highly regulated that because it's compliant it is automatically secure, kind of amplifies this message. And that becomes a risk again.
[Darren] (23:18 - 23:35)
Yeah, it's a back and forth. We keep hearing from various angles that compliance and security are the same thing.
But compliance is what you can prove, and security is what's actually demonstrated when you're attacked. And there's a fair gap there in many cases.
[Pinja] (23:35 - 23:58)
You might meet the ISO standard requirements, but that's a bare minimum level. So, moving from cloud to open source: some are still hesitant about using open source because it's open, right? Anybody can get at it.
Anybody can have their way with it and start messing with it. But the whole idea of open source is actually built on trust in people, right?
[Darren] (23:58 - 24:31)
Yeah. The idea is transparency and literal openness. Anyone can look at the code, anyone can verify the code.
And we actually have two myths in tandem here, because we have "open source is insecure by default," but we also have "closed source is better and more secure because attackers can't see the code." And to be honest, I had that conversation about 18 months ago and it's still wedged in my head, because that's just not how transparency works. And that was depressing to me.
[Pinja] (24:31 - 24:54)
Yeah. So there are still some people who say that transparency is not good. With closed source software, because it's not visible and cannot be verified, many say it is actually better, because then attackers cannot see the code, right? So why would that actually be better, if we take, let's say, the VPN protocols here as a comparison?
[Darren] (24:54 - 25:41)
And let's be fair, the attackers not being able to see the code does add a small amount of security, because they can't simply read it to reverse engineer it. But on the other side, as you say, with VPN protocols you have OpenVPN and WireGuard, the open protocols that can actually be verified and checked by the community at large. And then you have things like SSTP from Microsoft and Cisco's, I think it's AnyConnect, which are closed source.
And that doesn't necessarily make them better. It just means there may be security problems that their smaller pool of investigators has not turned up. So transparency, when it comes to software, is a good thing.
It doesn't mean every piece of software needs transparency, but there are advantages to having open source.
[Pinja] (25:41 - 26:05)
So we have a couple more of these, and we can figure out where to put them, but maybe the first one is related to people. There's still this misconception that if you rely on technology, if you rely on your antivirus, your firewalls, and everything else, security is about the technology and not the people. But actually the majority of the problems come from us, the people.
[Darren] (26:05 - 26:21)
We checked a couple of studies, and it was between 70 and 90% depending on which study and how they were looking at it. But people are the variable, and they're the ones that need the most guidance through this.
[Pinja] (26:21 - 27:02)
Yeah. So again, we're talking about phishing, and we can talk about social engineering; you can social engineer your way into somebody's office, unfortunately still quite easily nowadays. There are people taking photos of their work badges when they leave on their last day, which I find so baffling when I see them: goodbye, I'm leaving this company, and here's a present for you.
Here's a full photo of my badge. So all these kinds of things. But again, it comes down to the beliefs that people have. One of these beliefs is that once data is deleted, it's gone forever.
So I don't have to worry about it after all. And that, I guess, depends on how you delete it, right?
[Darren] (27:02 - 27:24)
Yeah. PC storage is not like a whiteboard. You can't just erase something and then it's gone.
It's more like it marks the storage area for reuse, but the data is still there until it's overwritten. So I think this is one that I just added because I occasionally have to explain this. And I think people should understand what happens with their data.
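Here is a minimal sketch of what "really deleting" a file involves, assuming a traditional spinning disk; SSDs with wear-levelling complicate this considerably, and the file name is just a placeholder.

```python
import os

path = "secret-notes.txt"                 # placeholder file name
with open(path, "w") as f:                # create a throwaway file for the demo
    f.write("the secrets we want gone")

size = os.path.getsize(path)
with open(path, "r+b") as f:
    f.write(b"\x00" * size)               # overwrite the existing bytes in place
    f.flush()
    os.fsync(f.fileno())                  # push the overwrite down to the device

os.remove(path)                           # a plain delete alone only unlinks the name;
                                          # the old bytes stay on disk until reused
```

The ordinary "delete" is just the last line on its own, which is why deleted data can often be recovered until the space is reused.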
[Pinja] (27:24 - 27:26)
Is it like a VHS tape?
[Darren] (27:26 - 27:27)
A little bit. Yeah.
[Pinja] (27:27 - 27:28)
Or a C cassette.
[Darren] (27:28 - 27:39)
Yeah, it just marks an area for re-recording. Actually, I was just thinking about the average age of our audience, and whether we have to explain what a VHS tape is.
[Pinja] (27:40 - 28:13)
We don't know what our audience's average age is. This is kind of like when we talked with Jamie Dobson and he said he had to explain what life in the old days was like. He thought he was going to explain the sixties, and then somebody wanted to hear about the nineties instead.
And we felt old. But if you don't know, Google VHS. Another piece of maybe-outdated technology by now is the USB stick. There was a time when you went to a convention and got a random USB stick from a stand.
It was branded with a Company XYZ Inc. logo, and we wouldn't think twice about it, right?
[Darren] (28:14 - 29:04)
Yeah. I was actually just looking on my desk for one of my favorite toys, because this is story time: every company's policy says don't plug random USB sticks in, and people don't really understand why. So I have this device that looks like a USB stick, but it's actually a USB microcontroller, which runs MicroPython and acts like a human interface device.
So when you plug it in, it pretends to be a keyboard and automatically types the code for an exploit. Because it's not downloading software, it's just keyboard input, it basically bypasses all antivirus and opens a connection to a remote server.
And when your IT team tells you not to plug in USB sticks, this is the worst thing that can happen: a malicious stick that does exactly that.
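To show how little is involved, here is a harmless sketch of the "keyboard in a stick" idea, written in CircuitPython (a MicroPython derivative) and assuming a board with native USB and the adafruit_hid library; it only types one visible line of text, whereas a malicious stick would type commands instead.

```python
# Runs on the microcontroller, not on a PC. It only types one harmless line.
import time
import usb_hid
from adafruit_hid.keyboard import Keyboard
from adafruit_hid.keyboard_layout_us import KeyboardLayoutUS

keyboard = Keyboard(usb_hid.devices)    # enumerate to the host as a USB keyboard
layout = KeyboardLayoutUS(keyboard)

time.sleep(2)                           # give the host a moment to register the "keyboard"
layout.write("this text was typed by a USB stick\n")   # keystrokes, not a file transfer
```

Because the host sees only keyboard input from a trusted device class, there is no file for an antivirus to scan, which is exactly why the blanket "don't plug it in" policy exists.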
[Pinja] (29:05 - 29:28)
So if you don't know what a VHS is, or if you don't know what a USB stick looks like, please go and Google it. You need to know what a USB stick looks like so you don't stick one into your laptop. Okay.
So one last thing, looking a little bit into the future. We've been talking about quantum computing and quantum computing chips before in our news sections. So in the future, do we see quantum computers making all encryption useless, basically overnight?
[Darren] (29:28 - 29:30)
I don't know. What do you think?
[Pinja] (29:30 - 29:37)
This is a really hard one, but this is something that when we're thinking about quantum computing, we might have to rethink the security aspects. Completely.
[Darren] (29:38 - 30:33)
Yeah. There are some groups working on what they call quantum-safe or quantum-ready encryption. And I believe there are some prototype protocols and methods out there today which should theoretically be safe from some level of quantum computing.
But it's such a multifaceted question, because if we go back to password changes: quantum computing will make breaking passwords so much faster that any previous leaks will be revisited and will be broken. So if we don't have forced password changes, there's a good chance that old leaks will become relevant again, because they don't have quantum-safe encryption applied. And the speed at which that could be applied is also a factor, because people will be lagging behind.
So it's hard to say what will happen, just that it's an interesting and concerning space.
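As a rough back-of-the-envelope on why old leaks could come back into play: Grover's algorithm searches a space of N possibilities in roughly the square root of N steps, which is often summarized as halving the effective bit strength of a brute-force search. The sketch below leans on that rule of thumb and on an assumed guess rate, so treat the numbers as illustrations only, not a forecast of real quantum hardware.

```python
# Rough illustration only: assumes Grover's quadratic speedup applies cleanly and
# that a quantum attacker evaluates guesses at the same rate as today's rigs.
GUESSES_PER_SECOND = 1e12

def years_to_exhaust(effective_bits: float) -> float:
    return (2 ** effective_bits) / GUESSES_PER_SECOND / (3600 * 24 * 365)

classical_bits = 128                      # e.g. a strong key or a very long passphrase
grover_bits = classical_bits / 2          # rule-of-thumb effective strength against Grover

print(f"classical search : {years_to_exhaust(classical_bits):.2e} years")
print(f"with Grover      : {years_to_exhaust(grover_bits):.2e} years")
```

Under those assumptions, a search that is hopeless today collapses to something tractable, which is the worry about previously leaked, never-rotated credentials.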
[Pinja] (30:33 - 31:08)
It is, and something to look forward to, because as I said, it might change a lot about how things are being done in the security space. So we covered a lot of these topics and what is actually true in the current light of what we know about cybersecurity.
Somebody in five years might tell us, oh boy, were you wrong! But this is what is known now. So don't stick a USB stick into your laptop without knowing where it comes from.
And yeah, please Google the VHS tape if you don't know what it is. And don't put all your trust in the privacy screen on your laptop, I guess.
[Darren] (31:09 - 31:20)
I think there's this concept in security called avoiding FUD, that's fear, uncertainty, and doubt. And just talking about these things and asking questions is the most important thing you can do.
[Pinja] (31:20 - 31:24)
On that note, I think that's all the time we have for this topic today. Thanks for joining me, Darren.
[Darren] (31:24 - 31:24)
It's been a pleasure.
[Pinja] (31:25 - 31:27)
Okay. Thank you all for joining in. We'll see you next time.
[Darren] (31:31 - 31:33)
We'll now tell you a little bit about who we are.
[Pinja] (31:33 - 31:38)
I'm Pinja Kujala. I specialize in agile and portfolio management topics at Eficode.
[Darren] (31:38 - 31:41)
I'm Darren Richardson, security consultant at Eficode.
[Pinja] (31:41 - 31:43)
Thanks for tuning in. We'll catch you next time.
[Darren] (31:43 - 31:49)
And remember, if you like what you hear, please like, rate, and subscribe on your favorite podcast platform. It means the world to us.