Minimum Viable Product, or just MVP, is a version of a product with just enough features to be usable by early customers, who can then provide feedback for future product development. Or is it? We invited Arto Kiiskinen, Harri Pendolin, and Christian Clausen from Eficode to debate the purpose of the Minimum Viable Product.

Arto (00:08):

It would benefit the organization to really discuss what we mean by it. And associated with that discussion about the MVP, we should be talking about the hypotheses and what kind of tests we plan to do. Or, if I may go a little bit deeper on that, the tests are not something where you just do these tests and that's it, we've done the tests. It's a continuous activity. You don't stop testing. We are doing software for the product. We are going to do it for years and years to come, which means that we are continuing to have these hypotheses. We are continuing to test. So it's more about, maybe, the strategy of doing these hypotheses, testing, and development.

Marc (00:59):

Hello, and welcome to the DevOps Sauna podcast. I'm Marc Dillon, and I'm your host today. According to Wikipedia, a Minimum Viable Product, or MVP for short, is a version of a product with just enough features to be usable by early customers, who can then provide feedback for future product development. Or is it? Like every central tenet in Agile software development, the purpose of an MVP can be debated, with good reason too. This is what we did. We came together to discuss what an MVP is, what its purpose is, and what teams are trying to solve with their MVPs. I'm joined today by Arto Kiiskinen, Harri Pendolin, and Christian Clausen from Eficode. All of them have years of experience in Agile software development and DevOps practices under their belts. Please join me in the discussion.

Marc (01:48):

What is the purpose of an MVP? I have here today Arto Kiiskinen.

Arto (01:57):

Hi, everyone.

Marc (01:58):

Harri Pendolin.

Harri (02:00):

Hello.

Marc (02:01):

And Christian Clausen.

Christian (02:02):

Hello.

Marc (02:03):

So let's get started. Arto, this was your idea to have a podcast about MVP. So tell us, what is the purpose of an MVP? Why did you want to do this today?

Arto (02:13):

My purpose in inviting a few of my colleagues to talk about what an MVP is, is that I see it all the time that the term MVP, or minimum viable product, is misunderstood, or understood in different ways, even within the same organization. So I wanted us to open up, let's say, the history of the term, the current understanding of the term, and how we think it should be used. That was my purpose in inviting you guys to talk about this.

Marc (02:48):

Great. What is the history of MVP?

Arto (02:51):

I think that the term Minimum Viable Product started to be used in the very early 2000s, if I'm not wrong. The original concept of the term was to think about the actual product release: okay, let's not cram everything into the product and then release two years down the road, but let's think about what we should put into the very minimum first release that goes out to the customers. What would be the most valuable features that we would put there, and what would be the minimum functionality that the users can at least tolerate, or even enjoy? So I think that's where the term MVP started to be used. Then, Lean Startup happened. Maybe Harri, if you want to jump in here. I guess Lean Startup was 2010 or 2011, when the term MVP was, let's say, redefined. Am I right?

Harri (03:54):

I'm not sure whether you're right, but it's true that I got interested in the MVP concept when I read the book by Eric Ries, and that was something like 2012 or so. Because my background is in product management, and that's more about understanding whether something should be done or not, I got very excited that, okay, now we have a concept that can be used to understand whether this business case is valid, or whether we have a way to test whether we have the right numbers in our business case Excel. For me, the MVP has always been more like a concept to test our hypotheses than the first release of a product.

Marc (04:39):

That sounds like it could also be a proof of concept?

Harri (04:43):

I think a proof of concept can be an MVP, or vice versa, and an MVP can also be a prototype. But when you have a new business idea, right, there are different kinds of concerns that you have. Maybe you have overestimated the market, or maybe you have found a problem which is a problem for you, but it's not actually a problem for a big bunch of people. Or maybe you have just misunderstood the feasibility part. Those are different problems, and all of them can be tested with different kinds of MVPs. I know, Christian, you have different ideas about it.

Christian (05:24):

Yes, I don't think MVPs are for testing whether something is feasible or not. I also don't think they have anything to do with prototypes. Feasibility is a proof-of-concept sort of thing, and I agree it should also be minimal. You shouldn't build stuff that you don't need. But where it differs in a major way from an MVP, for me at least, is that you should release an MVP to the users, because you're testing a business hypothesis, and you can only do that with real users. Whereas with a proof of concept, you are testing whether something is possible, and you can assert that without real users. I think a prototype is about what the design and feel of an application should be, and that has nothing to do with either. It should still use real users, of course, to test the experience, the design, and so on, but it doesn't have any working software in it, in my opinion.

Harri (06:12):

But can the concept of MVP also be used for products other than software products, right?

Marc (06:20):

Like having something tangible that you could actually sell. This is how I've always thought of the MVP: what is the minimum thing that we can actually sell? And if somebody is willing to buy it, that is the concept of Lean. Don't build anything until you have somebody who has shown you the money, who kind of has the money ready. So this MVP being a release is how I always thought of it: the first public, sellable release of a product. And that could be a physical product as well, making them in the garage.

Christian (06:54):

I don't think something needs to be sellable. And the reason I say that is that I think a lot of software now, especially consumer software, is free to use. And so, the selling part becomes a little bit of a gray area. I think it's more about users choosing to use it voluntarily, so to speak.

Harri (07:13):

Okay, if it's not sellable and you don't sell it, you just put it in your customers' hands to be tested, what's the difference then between a proof of concept and an MVP?

Christian (07:25):

I wouldn't give a proof of concept to users. I would use it just to test whether I can actually build something that does something. It doesn't have any usability; it would probably be called from the command line, or not even called externally. It's like a big test case for me.

Arto (07:40):

If I may, the "proof" word in proof of concept also relates to the fact that sometimes you need to prove that you can build it, and sometimes you need to prove that somebody's going to buy it if you build it. The proof of concept can be geared towards whatever the biggest uncertainty is that we are going to have to test. What do you think?

Christian (08:00):

I think that's reasonable. I think the proof of concept is a technical tool, and I think the MVP is a business tool.

Marc (08:08):

But I like this idea a lot that Arto raised, because many times a proof of concept is: can you take two dots and connect them and prove that that's feasible? What Arto just raised is that maybe one of those dots is the customer, and the other one is your application, or the problem that you're trying to solve. I never thought of a proof of concept in those kinds of terms before. But whether you're giving it away or selling it, still, what kind of goals do you attach to this? Are there other kinds of goals for an MVP? So, let's say you give it away as a proof that you can connect the dots.

Harri (08:43):

I think whenever you call something an MVP, you should have some kind of hypothesis that you are testing. Whether you call those goals or hypotheses, you are testing something. I mean, if we just have a first release of a product, you just launch it. I understand that if there is a market already, we have a product, and we know the customers are there, then what are we testing? Maybe nothing. But when you have something new, some uncertainties, you need to have an MVP, or you want to have an MVP. Then, you should also define the hypotheses. What is it that you need to prove? And that's, I think, the big difference between just talking about MVPs and doing MVPs: whether you have defined, before you do the MVP, that, okay, this is what we want to prove or want to solve, or what the hypotheses are that we need to find out whether they are true or not.

Arto (09:40):

It's an interesting point that you have to define the hypotheses related to an MVP. Would you agree that, still today, in many organizations we have the situation where the term MVP is understood as: what collection of features must we have in the product when we release it? And this is symptomatic especially in organizations where the release frequency is low, where you release maybe a couple of times a year. And we still see these organizations, especially when we are talking about companies that do products that are not only software but also tangible things; they don't release weekly or daily. In these kinds of organizations, the concept of MVP is still closer to the original, traditional one, where it started from. And it's not really so well related to this "let's have a hypothesis" or "let's test things."

Arto (10:37):

So what you brought out, Harri, is that if you would start to think that with our MVP we should have a hypothesis, I think it might actually raise a question in these older, more traditional organizations: actually, if we release these 20 things as our MVP, then we are going to have 20 or more hypotheses to test at the same time. It might actually act as a bit of a wake-up call. What do you think?

Harri (11:06):

I don't think we need 20 hypotheses if we had 20 new features, for example. But the features together should give us some value, or give the users some value. Whether we understand the amount of value, that can be the hypothesis. But what I'm trying to say with hypotheses is that if we don't have anything to test or prove, then why do we call the first release an MVP? Why is it not just the first release?

Marc (11:36):

Well, having worked with a few really talented product managers over the years, the most talented product managers tell you, "You don't need this in order to be able to sell" or "You don't need this in order to make that release." Contributing and adding value can be looked at as different things. Everybody can contribute whatever feature they think is important to something, but oftentimes not every one of those adds value.

Marc (12:04):

But if you're able to narrow down to the minimum amount of things that will add value for a specific customer, or a set of customers, or a set of different customer segments... If I look at this just at face value: MVP, the minimum viable product, or, I like this, Arto, I don't know if it was a Freudian slip or not, but he said the minimum valuable product. I think that was really kind of a revelation for me. It's like, can you sell something that adds value or not? And the more complexity, the more features and things that you put in there, the more risk you undertake in order to be able to deliver it, and, like Arto said with these hypotheses, the more difficulty you're going to have understanding which ones actually add the value.

Harri (12:50):

But do you need an MVP to test all the features? Or can you do traditional user testing, or something else, to find out whether you should do this feature or that feature? I think that's the value of a good product manager: he doesn't really need to release something to be able to test the value of those features. I mean, there are different kinds of tools.

Christian (13:12):

I don't believe we can predict what the users will actually need or want. I read a study recently, I think Google did it, where they said that their business analysts, some of the best in the world, were wrong about the expected outcome of an experiment in 60% of the cases, and it actually had the opposite effect in 30% of the cases. I don't think we can predict what users are going to want or need, or how they're going to use it. That's why I think the MVP exists at all: to remedy this, to put it in front of real users.

Arto (13:44):

That's an interesting statistic. Interesting that the best analysts in the world are wrong 60% of the time. I think that's exactly what you were saying, Marc: some very good product managers come to you with the answers, okay, we need these features in the product. My instant question would be: based on what data, or based on what test results? Are these your opinions, or have you talked to two customers and that's the decision you've arrived at? These are actually the hypotheses, right, that Harri was talking about. They should be tested and verified before making the actual investment of developing even those features.

Marc (14:25):

Well, I'm going to interject here for a moment. Hypothesis. Scientific method. Where does the scientific method start? It starts with research, then it goes to a hypothesis, and then it goes to testing. There's a pre-process here, which is doing the user testing and using these kinds of really inexpensive tools to be able to understand: does the user like it like this or like that? ABX testing is one of my favorites, where you throw the dart backwards and that's the highest score. But what do you think, Harri?

Harri (14:55):

I think that we need different kinds of tools to test feasibility and usability and desirability. An MVP is just one way to test. For me, again, it's a test; it's not the first release. If we come back to those 20 features, I mean, how do we really define whether they are valuable? One user can say that it's valuable. Another user says, "I don't need that feature at all." And a third one says, "Yeah, it's a nice feature, a great feature, thanks to whoever developed it," but he never uses it. And a fourth user says, "Okay, yeah, I'm using this feature all the time, but I wouldn't pay for it." If we don't have any kind of threshold for how valuable something is, then how can we know whether it's valuable or not? That's what I mean by a hypothesis: okay, if we release something, those 20 features, for example, whether we get some traction, or whether we can raise the price, or whether anybody buys the product or not.
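
To make that idea of a threshold concrete: a hypothesis attached to an MVP release can be written down as a statement, a metric, and a pass/fail line agreed before anything ships. The sketch below is our illustration rather than anything prescribed in the episode; the class, the metric name, and the numbers are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """A falsifiable statement attached to an MVP release."""
    statement: str    # what we believe will happen
    metric: str       # what we will measure once real users have it
    threshold: float  # pass/fail line, agreed on *before* the release

    def holds(self, observed: float) -> bool:
        """True if the observed metric clears the agreed threshold."""
        return observed >= self.threshold

# Hypothetical example in the spirit of Harri's "threshold for value":
h = Hypothesis(
    statement="Early adopters will pay for the scheduling feature",
    metric="paid conversions per 100 trial users",
    threshold=5.0,
)

observed = 3.2  # measured after the MVP is in users' hands
print("hypothesis holds" if h.holds(observed) else "pivot, rework, or scrap")
```

The point is not the code itself, but that the threshold is fixed up front, so the MVP cannot quietly become an excuse after the fact.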

Arto (16:05):

I agree with that, Harri. What I'm thinking right now is: is this different for small companies or startups versus large companies which are more mature, where the market is more mature? Why I'm saying that is because a startup is, by definition, working with a very, very small group of people, early adopters, and they have the freedom of changing their products drastically and fast, and that's expected of them. So it's possible for them to do maybe even aggressive experimentation. But would the same laws apply when a company is big, or has, let's say, a more dominant position in the market? What do you guys think? Is the same use of MVP as in a startup possible, or does any part of it change for a more established player?

Marc (16:58):

Maybe I can agree with Harri and disagree at the same time. I agree, Harri, that the MVP is not the first release. I think it's every release. I think that one way to look at this is how quickly you are able to go from idea to customer feedback. So, if you look at a development value stream as an example, you've got talking with customers, getting customer feedback, maybe aggregating feedback from customer service or from whatever your CRM is and your ticketing systems. And then a developer gets a ticket, writes some code, and commits it. And then it goes through code scanning and test automation, running all the unit tests and system tests and QA tests and UAT tests. Finally, it gets deployed, and then the customer touches it. So there's this huge chain there that has to happen every time.

Marc (17:56):

And there's this thing called value creation, which is that moment between the developer taking the ticket and committing the code. But all this other stuff has to happen. The faster all of that stuff happens, the faster you understand: did that developer add value or not? Does that little feature add value or not? So, one commit of that work is a minimum viable release, or a minimum valuable release. So, when I look at SAFe teams that have two-month program increments and are making releases every two months, is that really the minimum viable release they could be adding value to the product with? I mean, two months' worth of work? Harri kind of made me think of it in a different way. It's like, MVP could be every release.
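
One way to put a number on that chain is the lead time for changes: how long a commit waits before a real user can touch it. The sketch below uses made-up timestamps purely for illustration; in practice they would come from your version control and deployment tooling.

```python
from datetime import datetime
from statistics import median

# Hypothetical commit -> deploy timestamps, as they might be pulled
# from a CI/CD system; the values here are invented for the example.
changes = [
    {"committed": datetime(2022, 3, 1, 9, 0),  "deployed": datetime(2022, 3, 1, 15, 30)},
    {"committed": datetime(2022, 3, 2, 10, 0), "deployed": datetime(2022, 3, 4, 11, 0)},
    {"committed": datetime(2022, 3, 3, 14, 0), "deployed": datetime(2022, 3, 3, 16, 45)},
]

# Lead time per change, in hours: the gap between value being created
# (the commit) and value being verifiable (a user can touch it).
lead_times = [
    (c["deployed"] - c["committed"]).total_seconds() / 3600
    for c in changes
]

print(f"median lead time: {median(lead_times):.1f} hours")
```

The shorter that median, the sooner each "minimum valuable release" tells you whether the work actually added value.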

Christian (18:40):

I mean, if we're saying that the "minimum viable" part binds to the product and not to the addition, then the product wouldn't stay minimal for very long. Hopefully it would stay viable, at least. For me, it's important to think about why we have MVPs, why we do them at all. I think it's a natural extension of the Agile movement. I like to define it as seeking ever cheaper experiments. So, what is the cheapest way we can test whether something is valuable to our users, whether something is worth building? Well, it's giving it to users, right? Building it as small as possible, then giving it to someone and seeing if they're going to use it. If they are, then cool, we can add more stuff to it. I don't disagree that we should have small batches, of course, but that's for a different reason, I think, in my head.

Harri (19:26):

Coming back to Arto's question of whether there's a difference between startups and big companies: what Christian just said, that okay, we can build something and then see whether users start using it, and if not, then build something on top of it, is different from what startups do. I mean, if somebody doesn't use it, then you scrap it, right? You don't just build something on top of it; instead, okay, this was a mistake, we pivot and do something else. Why is that? It's because big companies usually operate in a mature market. They know fairly well what the customers do and what they want. Startups are trying to create something new: a new market, a new way to solve the problem, something new to get to the market. And for them, it's more about testing whether the problem is big enough, whether their solution is good enough. There are more uncertainties than what the big companies with existing business models and markets have. That's why I see that the need for MVPs, and the way to think about them, are different in these two cases.

Christian (20:38):

What I meant to say before was not that you should just add something on top if it wasn't useful. I would still scrap it if I were an enterprise. If it isn't useful already, there's no need to have it in the final product, whatever shape the final product has. So just like startups, I think big companies should scrap things that aren't working for them.

Arto (20:56):

I agree totally with that.

Harri (20:57):

But what is the final product, if you keep releasing anyhow?

Christian (21:01):

Yes, right. There is no final. Sorry, I shouldn't have said final. Software doesn't stop being developed. It needs maintenance. It needs extensions. The world tends to change, and the software is a model of the world, so there is no final.

Marc (21:13):

But Harri, on the bigger-company side of things, do you think that there's an MVP mentality that might help them overcome the opportunity cost of things? Many companies are afraid to change because they have a gravy train of products that are going through and making money. They're resistant to change because that would take time away from what is already making them money. Do you think that there's an MVP opportunity for those guys?

Harri (21:44):

For sure. You know as well as we all do that changing direction is difficult, and proving that there is an opportunity with a completely new product is difficult. I think the MVP could be a tool for that. You could do testing with a defined hypothesis and show that the business case you have prepared is viable.

Arto (22:10):

The most recent example of this is that all the big automotive companies disregarded electric vehicles altogether. Okay, what's happening right now? In a few years' time, Tesla will conquer everything. Also, Chinese EV manufacturers will basically rule the world over. Then the big US car manufacturers will disappear in a few years. They compared the small EV market to the current ICE-engine market and decided not to go there. This kind of thing happens all the time. Nokia is another example. Kodak is another example. The world is filled with these kinds of pitfalls that large companies have fallen into. Understanding the MVP better in the big companies is what is needed.

Christian (22:58):

I think there are generally two things preventing big companies from executing MVPs effectively. One is falling into the trap of testing too many things at once, like testing a technical challenge, and then, okay, we can actually build this thing, and then thinking this is now also an MVP. And so they ship it without it actually being built to be shippable, which is also why I like to make the distinction between a technical hypothesis and a business hypothesis. The other reason is that they build frameworks, or build organizations, where the lower bound for how small an experiment can be is set very high, so you actually need to do a lot of stuff. I've worked with customers where there was a rule that every team had to have a scrum master and a product owner. Already you've prevented yourself from having just one developer sit down and do something new, because you need at least three people now to start a team, which makes it a lot more expensive already.

Marc (23:49):

I think there's another interesting thing that came up with you guys today in terms of defining an MVP or a proof of concept or a prototype. The problem that I see is that when Agile started and companies didn't really wholeheartedly adopt it, they did a bunch of strange things that weren't Agile, but they called it Agile. And some people for a long time thought that, well, Agile means make crap but make it fast. We all know that it's not that. Now, Agile adoption is pretty much everywhere, and it's the standard way to do software. But on the MVP side, there can also be this perception that it means, okay, let's just get some hacky thing out as fast as we can. And in software, there's nothing more permanent than temporary. So, how would you guys address scoping the quality of an MVP?

Arto (24:40):

If I may go first, I think the quality is tied to... I think Christian, you said earlier: what is the cheapest way we can run this test? If we consider the MVP to be a test of a hypothesis, what's the cheapest and fastest way to test it? That's step one. When we get the results, then we can actually ask the question: okay, what do we need to build now that we know?

Christian (25:06):

I think quality-wise it differs from a proof of concept, because an MVP I would actually build it to be... Well, first of all, quality can mean many things, right? It can mean highly performant, highly scalable, easy to maintain, looking great, feeling great, expensive and luxurious. It can mean so many things. For me, the quality would be in the maintainability. At least that's where it differs from some of the other ones, because I think an MVP should be maintainable: our expectation is that the business hypothesis will hold and we will have to add more features to it. Whereas a proof of concept I would throw away after I'm done validating whether something is possible or not, so I wouldn't build any quality into that one.

Harri (25:45):

Yeah, I think what we have kind of forgotten, at least if you define the MVP the way that Eric Ries did, is that what we are actually trying to achieve with an MVP is learning. So we have this learning cycle: build, measure, learn, and then you do something else. And really, it's not about anything other than learning, so that you can then define what the product is that you should build, or how you should do it, or what the problem is that you should solve.

Arto (26:15):

I think that for many organizations... I agree with you, Harri, with what you said, but for many organizations the problem starts already there: the product managers, or the organization, think they know already, so they have no incentive to do these tests.

Harri (26:35):

Mature organizations, you mean?

Arto (26:37):

Yes, mature organizations. I'm not talking about startups, although that probably happens to startups as well. But they find out a lot faster, because they run out of money. This was actually one of my original thoughts in wanting to talk about this subject: first of all, MVP is still understood differently in different organizations. Some understand it the way Eric Ries in The Lean Startup defined it, but some still use it as the original idea of minimum viable: okay, what can we get away with so that people will still buy it?

Arto (27:09):

But also, some organizations, and here I'm a little bit joking but not joking, still see MVP as the maximum viable product. What I mean by maximum viable product is that when they have decided that they release, let's say, three months from now, then a lot of forces within the organization start to think: okay, three months, what can we cram into that release? So they try to overstuff it, because that's the release opportunity, which is totally against everything that we've been talking about. But I claim that this kind of mentality still exists. What do you think?

Christian (27:54):

I agree, it exists. I've experienced it quite a few times. It's infuriating, because it feels more like deadline-driven development than anything. We have all these nice experiments and all of these nice ways of working, and it doesn't work if you just overload them completely. Software needs to be developed at a pace that's sustainable, first of all, and such that we can guarantee all of the reasonable checks and balances are made. Just going for deadlines and cramming in extra stuff all the time can completely destroy something like the test for business viability, right? You can actually add features in a way that makes other features not useful, which destroys the whole point: you don't know what is useful and what is not. You could remove something and make the product more valuable; at that point, it screws up everything in this context.

Marc (28:40):

Kind of like the classical Apple approach, where they added features to all their products very, very slowly, but made sure that the first few features they had worked really well and were intuitive and easy to use.

Arto (28:55):

Christian, like you say, it's infuriating that this maximum viable product approach happens. Even more infuriating is that they still call it an MVP. They call it minimum viable because they claim that, yes, we know this is what is needed, we can't sell it without these things. So somehow I get the feeling that there needs to be some sort of paradigm shift, or thinking shift, in this. And this is more of a problem in mature companies. There needs to be some thinking shift in the product management in these types of companies.

Christian (29:31):

Honestly, though, it is difficult to discover what proper minimum viable looks like for a given product. I mean, if you're doing a web platform, you may need to do the central feature that you want to build, that you want to release. But you may also need something like a log-in thing, and you may be thinking, what about single sign-on, and all of these other things, and where does the limit actually go for what you can reasonably release? How secure does it need to be? How performant does it need to be? All of these things are difficult. I mean, I would never say that it's easy to decide what should go in an MVP, but it's easy to see after the fact that there's too much, if it's sort of maximum viable.

Harri (30:07):

I fully agree with you, Christian, that that's perhaps the biggest problem with MVP. It's just something that you declare: okay, let's do an MVP. And perhaps the reason is that you don't know what the reasons are. Then, by calling the first release an MVP, you think that you can do whatever, because it was just an MVP. You either load in too many features, or not that many features, because you don't have a hunch about what's needed. But because you call it an MVP, you think that it's okay.

Marc (30:40):

Or completely sacrifice the quality and put it into production forever anyway.

Arto (30:46):

And Christian, continuing on that front, I think that there are situations where making an MVP is more difficult than others. I have seen it, and I have been in that position: for example, when you are trying to replace a legacy product that has existed for, let's say, 15 or 20 years, it's in wide use within the customer base, and you know that you are having a hard time keeping it up to speed. You have to replace some part of the technology, or sometimes even build a completely new product to replace it. But the problem is that a lot of people are using the features, and you might have thousands of features in there. That's a really tough problem to crack. What do you do? How do you start to replace that thing? You have to build the same thing again. Do you build everything? If you build only part, you won't have people moving to the new product. I think that's the toughest situation in which to start thinking: okay, what should the first release be?

Marc (31:49):

Like in replacing a legacy system, where to start? Is that it?

Arto (31:53):

Yeah.

Christian (31:54):

I've been part of lift-and-shift transformations, migrations, whatever you call them, where we have some large system and we can't pull pieces out in the way that we would like, because they're changing some fundamental part of the architecture, or they're changing the whole tool altogether and we need to do a lot of plug-ins or something like that, and it comes with so many headaches. But I haven't seen it done so well that I would consider it even a solved problem. It's just difficult. You could build some of the features enough to be useful, and then you could sort of degrade the quality of the original product, the old one, and then people should be more inclined to change on their own, because the old one is getting worse and the new one is getting better.

Christian (32:35):

But I just think people don't like change. I know for myself that when I see there's an update on my phone, I'm like, "Oh no, this is going to break all my settings again." We have this feeling that moving and changing our tools is going to be painful. So I think it's really, really, really difficult to replace a legacy system that people are using. It also changes the whole scale of what you need to build again. Because if you have no users, you can build something without security, without performance. It doesn't matter if there are zero users, or one user, or five. But if you have 500 users using something, then you need to have it secure from the beginning. You need to have it performant from the beginning. So minimum viable becomes a totally different thing.

Arto (33:10):

The thing to consider is, when you try to replace a legacy system, to start with a totally new user group, or user segment, or market, because then you won't have that problem. They are new. They don't have the expectations. They don't have this fear of change. And then, once you maybe go over a threshold, the new product becomes interesting for the users of the current product. And you do it a segment at a time.

Harri (33:35):

Now we have a plan for it, but do we have any hypotheses? I mean, how do we know that the plan you just described will work if we don't have any kind of measures for it? And that's, I think, the concept of MVP. Okay, we have a plan, let's do it this way. If we do it this way, then what will happen after the first phase? Can we get the new users to use the product? Then the second phase is something else. For all the phases, we should have some kind of target, whether or not we call them hypotheses. Then, how do we test it? That's another question. But that boils down to the product strategy, and perhaps the MVP can be the first phase of the product strategy: okay, this is the kind of plan we have, and it requires that we get this many users, for example, with this new product.

Christian (34:27):

Regarding your measures: if we're looking at something such as usability, people will already have a way to solve their current problems, because they're going through their lives. They're not usually stuck on some problem. And so, when you're releasing some software that's an MVP, you can actually see: does this make their lives easier or harder? And if it makes them easier, they're going to use it more, and they're going to recommend it to people. So I think there naturally is a measure built in, if you're solving an actual problem for someone, and if you're not creating a completely new sort of demand out of nothing.

Christian (35:01):

I still think that if we're doing something like a migration of a big tool that somebody uses internally, segregating users will still mean you have to take someone and not put them in the most productive state, because the current most productive state is the old system. So you either have to force them to do what came before that, some manual processes or something like that, and then give them the tool, and of course it's going to be viable compared to whatever came before. The real question is: is it better than the old system was? And it makes it even worse that one reason for migrating away from a legacy system is cost reduction. In which case, it's not a given that the new system will actually be a better experience in any way. It just may be a lot cheaper. So we actually have to worsen the experience for some of these people, which means we can never do an MVP. It will never be a positive outcome, because the intention is for it to be worse, but also cheaper.

Marc (35:56):

We have a couple of minutes. Would each of you like to give some kind of last words?

Arto (36:01):

Okay, I can go first. What I think we've established here is that the term MVP is still difficult, and it would benefit the organization to really discuss, within the organization, what we mean. Let's not just use the term MVP without discussing what we mean by it.

Arto (36:22):

Associated with that discussion about the MVP, we should be talking about the hypotheses and what kind of tests we plan to do. Or, if I may go a little bit deeper on that, the tests are not something where you just do these tests and that's it, we've done the tests. It's a continuous activity, right? You don't stop testing. We are doing software for the product. We are going to do it for years and years to come, hopefully, which means that we are continuing to have these hypotheses. We are continuing to test. So it's more about, maybe, the strategy of doing these hypotheses, testing, and development. And that's something that should be discussed within the organization, so that everybody is on the same page.

Christian (37:08):

So I think software is about learning, and it's always going to be about learning. We're learning what's usable, what's valuable, how to build stuff, how it should be maintained, how the architecture should be. All of it is learning. And so, I think it's important that we keep these experiments fairly clean. I'll just leave off with two pieces of advice: don't release your proofs of concept, and always release your MVPs.

Harri (37:33):

Okay, I fully agree with Arto that the term is difficult. It gets a lot easier if we just agree on what the term means. And it can mean one thing in one company and another thing in another company. Or even in different situations, the MVP can be different. So that's a good way to start to understand what an MVP is. Then, the second point is that we shouldn't use the MVP as an excuse. Always, when you do an MVP, you should have a target for it: what are we testing? And then agree on it. Don't fall into the trap where we just did an MVP, and because it was an MVP, let's do more features. Instead, decide, based on the results of the MVP, what to do next: scrap the whole product, scrap some features, rewrite features, whatever that is.

Marc (38:24):

Excellent summary, guys. I don't feel the need to add anything, other than that I learned a lot of different points of view about MVPs today that even in my experience I had not seen before. I'd like to thank our guests, Harri, Christian, and Arto. Thank you, and I'm looking forward to the next DevOps Sauna podcast.

Marc (38:47):

Thank you, Arto, Harri, and Christian for joining. You can find the links to their social media profiles in the show notes. If you haven't already, please subscribe to our podcast, and give us a rating on your platform. It means the world to us. Also, check out our other episodes for interesting and exciting talks. With that, I thank you again for the great company, and I say to you: take care of yourself, and remember, deliver value from your software.