At Eficode we’re always on the lookout for anything new in the DevOps scene. Recently, a few of us went to Madrid to check out Microsoft’s latest OpenHack event.

 

Microsoft really has gone all in on DevOps, as demonstrated by Azure DevOps and the DevOps OpenHack. But what exactly are these OpenHacks? I had to find out and I brought a few colleagues along with me to investigate. 

OpenHacks are organized by Microsoft, sometimes in conjunction with partners. Some are open and some are by invite only.

They are developer-focused, so if you are not a technical person then they are probably not for you. The events are designed to mimic real-world examples a software developer would encounter. They provide hands-on, problem-solving scenarios that you have to complete as a team. They are based on Microsoft technologies and pretty narrowly focused on the Azure cloud. 

For obvious reasons we attended the DevOps OpenHack, but there are many more OpenHacks with different focuses. For example, there are OpenHacks which focus on containerizing an application and moving it to the cloud, OpenHacks which focus on serverless functions, OpenHacks which focus on a lift and shift of Microsoft workloads to Azure, and quite a few more. 

To get an overview, check out openhack.microsoft.com.

The DevOps OpenHack

In its basic form, the DevOps OpenHack focuses on deploying some API-based microservices to an AKS (Azure Kubernetes Service) cluster. The idea is to use DevOps practices to do this and achieve zero-downtime deployment.

You mostly have to rely on Azure DevOps to solve the challenges. In practice, you have to create continuous integration and continuous delivery (CI/CD) pipelines to automate the testing, deployment, monitoring, and rollback of the microservices. You do not need to worry about the AKS cluster, as it is already spun up for you; you can treat it as a black box. 

You do have some freedom when deciding what technologies to use though. For example, you can choose to use GitHub or Jenkins for the pipeline implementations.

You should probably have some knowledge of containers, continuous integration, continuous delivery, and Kubernetes, as the microservices are containerized and deployed to a managed AKS cluster. 

It would also be advantageous to have some familiarity with Helm, as you will be deploying the microservices with Helm charts.
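To give a flavor of what that looks like, a Helm deploy step in an Azure pipeline might be sketched roughly as below. This is only an illustrative sketch using the standard HelmDeploy task; the service connection, resource group, cluster, and chart names are hypothetical placeholders, not the OpenHack's actual setup:

```yaml
# Hedged sketch: deploying one microservice's Helm chart to the pre-provisioned
# AKS cluster with the standard HelmDeploy task. All names are placeholders.
steps:
  - task: HelmDeploy@0
    displayName: Deploy microservice chart
    inputs:
      connectionType: Azure Resource Manager
      azureSubscriptionEndpoint: $(azureSubscription)  # hypothetical service connection
      azureResourceGroup: openhack-rg                  # placeholder resource group
      kubernetesCluster: openhack-aks                  # placeholder cluster name
      command: upgrade
      chartType: FilePath
      chartPath: charts/my-service                     # placeholder chart path
      releaseName: my-service
      arguments: --install --wait
```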

 

Our experience

The structure

On day one we showed up at the venue, signed in, and got name tags with our names, company, and assigned team. Tables had been organized for each team, each marked with a team number. There was a 40-inch screen for sharing, extension cords for power, a whiteboard for discussions and brainstorming sessions, and practical instructions covering things like Wi-Fi access, along with initial links to the necessary information.

The teams ranged in size from four people up to a maximum of eight. Our team consisted of myself, my two colleagues, and one other developer from a company located in Madrid. Some of the other teams were slightly larger, but most seemed to be between four and six. 

Each team was assigned a coach. The coaches are either from Microsoft or one of its partners. They have all been to at least one previous OpenHack and understand the different challenges the team will face. They are not there to tell you what to do, but they will guide you when you hit a blocking issue or are headed down completely the wrong path. There are different ways to solve the challenges, and the result is what is important. 

As you would expect, there was an opening talk introducing the event and giving guidance for the coming days. After that, the team and coach were in the driver's seat for the rest of the event. 

Following the intro, the coach guided us to a portal for the different challenges. Each challenge contained a description, including links to important information and the pre-existing code for the microservices, along with some success criteria for completing it. 

The team had to read the material, come to an understanding of the challenge, define the work items, and implement a solution that fulfilled the success criteria. The coach would then review the implementation and approve it. Once a challenge was approved, a new one was unlocked in the portal and the process would begin again. Each challenge typically built on the previous one, and they generally increased in complexity.

The challenges

I will not go into too much detail about the challenges, as I do not want to give too much away and spoil it for anyone who decides to attend an OpenHack. But I would like to briefly touch on some of them.

The first challenge was about defining how we were going to organize ourselves and work with the technologies and tools we would be using to solve the challenges. 

These were decisions like who would work on which microservice, whether to use GitHub, Jenkins, or Azure technologies, and so on. 

We were lucky because there were four different microservices and we had four people on the team, so everyone was able to get the same hands-on experience. 

We were pretty set on using Microsoft Azure as much as possible, so our baseline for everything was Azure DevOps.

  • Azure Boards for work items.
  • Azure Repos for a Git-based version control system. 
  • Azure Pipelines for CI/CD, as we all had pretty extensive experience with Jenkins and were eager to use all the Azure-based technologies we could. 
  • Azure Container Registry for the Docker images.

We also had to decide which implementations to make to support our DevOps practices. For example, we wanted everything automated, everything as code, automated tests on the microservices, etc. 

Even though this challenge was pretty simple, it was very important for getting up to speed. As we were mostly pretty experienced in DevOps, we had a tendency to want to jump straight into the details. I suggest you take your time here and use the first challenge to familiarize yourself with the process and the codebase.

The following challenges were more implementation focused. As you would expect from the description of the OpenHack, these mainly had to do with deploying the microservices to the AKS cluster. 

  • Simple deploys with build pipelines to implement CI/CD. 
  • Implementing automated tests with reports in the pipelines. 
  • Using pull requests and policies in conjunction with the pipelines.
  • Creating and using release pipelines, which are conceptually different to build pipelines in Azure DevOps. 
  • Doing Blue/Green deploys. 
  • Adding monitoring. 
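To make the Blue/Green idea a little more concrete: one common way to do it on Kubernetes is to run the old and new versions side by side and flip a Service selector between them. A minimal sketch of that pattern, with all names hypothetical rather than taken from the OpenHack's actual manifests:

```yaml
# Hedged sketch of Blue/Green on Kubernetes: the Service routes traffic to
# whichever "slot" its selector names. Deploy the green Deployment alongside
# blue, verify it, then flip the selector; rollback is flipping it back.
apiVersion: v1
kind: Service
metadata:
  name: my-service          # placeholder name
spec:
  selector:
    app: my-service
    slot: blue              # change to "green" once the green release is healthy
  ports:
    - port: 80
      targetPort: 8080
```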

By the third day we had completed the most important challenges and received a badge in acknowledgement.
There were still some optional challenges which we could continue to work on, so there was plenty to do and learn during the three days.

If you are going to take one thing from this blog post and our experience with the DevOps OpenHack, I hope it is this. 

Carefully understand each challenge and its success criteria. Focus on the success criteria and restrain yourself from over-engineering the solution, especially if you have been doing DevOps for a while. The challenges are designed to give you some core competencies within the Azure DevOps offering, and you really want to get through the most important, required challenges. 

Return on investment

So, what did we get out of attending the DevOps OpenHack?

First of all, this is not about AKS. The cluster and all environmental dependencies are pre-prepared for you. You don't have to worry about them, and you shouldn't. If you do take a look out of curiosity, it should only be for a cursory understanding of what the cluster looks like.

If you have never touched Azure DevOps, this is a great way to get started. You get some hands-on experience with Azure Boards, Repos, Pipelines, pull requests, policies, container registries, monitoring, etc. 

If you have used Azure DevOps then this could enhance your usage of it. For example, have you implemented a Blue/Green deployment on AKS? You will at the OpenHack. The challenges build up so that you get around to using most of the core features, and you may discover something new. Our team member from Madrid attended for exactly this reason. His company is actively using Azure DevOps.

What I found most valuable was Azure Pipelines. I have built many pipelines with Jenkins and found it quite interesting to see what Azure Pipelines could do and how pipelines are conceptually implemented in Azure. You will definitely get your hands dirty here understanding the concepts and structure of pipelines. There are triggers, resources, and standard Azure pipeline tasks like the Helm and Docker tasks.
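As a rough illustration of those building blocks, a build pipeline YAML with a branch trigger and the standard Docker task might look something like the sketch below. The service connection and repository names are assumptions for illustration, not the OpenHack's actual configuration:

```yaml
# Hedged sketch of an Azure build pipeline: a branch/path trigger plus the
# standard Docker task pushing to Azure Container Registry. Names are placeholders.
trigger:
  branches:
    include:
      - master
  paths:
    include:
      - my-service/*        # only build when this microservice changes

pool:
  vmImage: ubuntu-latest

steps:
  - task: Docker@2
    displayName: Build and push image to ACR
    inputs:
      containerRegistry: acr-connection   # hypothetical service connection
      repository: my-service
      command: buildAndPush
      dockerfile: my-service/Dockerfile
      tags: $(Build.BuildId)
```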

Opinions and views

Working in a company like Eficode, where we specialize in DevOps, it probably comes as no surprise that we have strong opinions. We probably wouldn’t be very good at what we do if we didn’t. 

Clickety click

One of the goals we defined in the first challenge was everything as code. This was not really realized during the three days. We spent a fair amount of time clicking around in the UI during the challenges: you know, clickety clicking around to set up policies, create pipelines, etc.

Any time I am forced into using the UI, I get a bit of a chill running down my back. I start having flashbacks to scenarios where someone, and we don't know who, changed something and everything is broken. The more pipelines you have, and the more complex they are, the more difficult this type of scenario becomes. In that situation it is much better to be able to look at the history in your version control system.

As an example, the normal process during the OpenHack was to create the pipeline from the UI and then export the YAML and commit it to the repository. This did give the benefit of a platform-aware editor with features like suggestions, auto-completion on tasks, and the ability to search for tasks, like the Docker task, from the browser. However, I sometimes got lost in the UI and spent what felt like too much time trying to figure out where to do what I wanted to do.

If you created the YAML first, then you needed to commit it, create the build pipeline, and point it at the YAML file. This highlights a bit of a disconnect between the platform's perception of the build pipeline and the YAML implementation of it. While solving the challenges at the OpenHack, it simply was not possible to avoid the UI when creating a pipeline, whether a build or a release pipeline. 

To be fair, we did not use multi-stage pipelines, which were still in preview and which could have reduced the UI clicking. Maybe that will come into the OpenHack challenges when multi-stage pipelines leave preview. Overall, I would have liked a little less of a UI-driven approach and more of a YAML-based approach to the challenges. 

Pieces of the puzzle

I see Azure DevOps as a technology that pulls together a large number of the pieces of the DevOps puzzle into one platform. I really like this and believe it can simplify the overall experience. 

Coming from a background of using a lot of open source tools to implement DevOps practices, I found it pretty clear that integration between the different pieces has been a focus of Azure DevOps. 

Basically, some of the glue is already there, allowing you to focus on value instead of configuration and integration. For example, getting all the features implemented in a release to be visible in the release pipeline was a snap. 

The flip side is that with the one-platform approach there may be some constraints on how you can implement your process. For example, there is a clear distinction between a build pipeline and a release pipeline in Azure DevOps. Conceptually, I am not sure I agree with this distinction, but that is a discussion for another time. The immediate impact of this split is that it affects the way you implement your DevOps practices. 

Overall, my opinion is that there are advantages to be gained from a platform like Azure DevOps, if you are comfortable with it constraining some parts of your process. Furthermore, you can always decide to plug in and glue other pieces of the puzzle, like open source tools, to Azure DevOps if you need more flexibility, or if the platform cannot provide everything you need.

Summary

All in all, I would say that the DevOps OpenHack was a very positive experience. Whether you are new to Azure DevOps, or have some experience and want to dig in with some developer-centric, real-world use cases, I think you would find it valuable.

Finally, I can say that Eficode will be partnering with Microsoft to co-host some OpenHacks in the future. We are a Microsoft Gold DevOps Partner and want to help our customers get the most out of our partnerships.

 


 

Published: Mar 11, 2020

Updated: Mar 25, 2024
