Minimize Risk with Continuous Integration (CI) and Deployment (CD)
Ahoy there. Continuous shipping: a concept many companies talk about but never get around to implementing. In the first post of this three-part series, we discussed the use case for continuous shipping. Let’s move on to part two: the integration and deployment stages of the continuous shipping process. Part three will wrap up the series with a look at the monitoring and feedback phases. All aboard that’s coming aboard.
Continuous shipping is a shortened feedback cycle that allows teams to minimize risk, increase productivity, and (ideally) improve customer sentiment. For context, compare continuous shipping to longer, more complicated release cycles that involve months of development and back-and-forth with QA teams. Although continuous shipping differs from that longer cycle in many ways, the key difference is quick iteration.
The continuous shipping process is a combination of integration, deployment, monitoring, and feedback. In this post, we’ll cover the first two elements: integration and deployment.
Step 1: Integration
As we work toward modernizing development, we should always ask ourselves how we can do our best work. Running our own servers or building our own monitoring every time is not sustainable. Instead, we want to find the best solutions and piece those solutions together. Unfortunately, ensuring these tools play nicely with each other is a rigorous process. One thing that makes the process easier, although not wholly pain-free, is writing high-quality tests. Sentry (the company), for example, does a lot of testing, yet we still ship bugs every day. Thankfully, we have Sentry (the product) to immediately catch those bugs.
One of the most significant challenges of integration is accepting that the process will be rigorous and time-consuming. It’s easy to say, “Oh yeah, I tested my code,” when you have a very small app. As soon as your change affects a large, complex application, you’re not going to have clear insight into what’s happening. A lot of what we build at this stage is meant to future-proof code against the inevitability of downstream issues, based on how we, as humans, want our software to work. We rarely write tests to confirm that the code we’re currently developing is correct. Instead, we write tests for when someone in the future changes that code we’re developing today. When the future change does occur, the test will fail at the right time and prevent someone from causing further problems.
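To make that concrete, here’s a minimal sketch of a test written against the future, not the present. The function and behavior below are hypothetical stand-ins for “the code we’re developing today”:

```python
# A stand-in for code we're developing today. The test below exists mostly
# to fail loudly if a future change alters behavior that callers depend on.

def normalize_email(address: str) -> str:
    """Lowercase and trim an email address before storing it."""
    return address.strip().lower()


def test_normalize_email_preserves_plus_tags():
    # If someone later decides to strip "+tag" suffixes, this test fails
    # and forces a conversation instead of a silent behavior change.
    assert normalize_email("  User+ci@Example.com ") == "user+ci@example.com"
```

The test feels almost too obvious while you’re writing the function; its value shows up months later, when someone edits `normalize_email` without knowing what downstream code expects.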
Ultimately, a successful integration phase comes down to a change control process. In other words: the process that allows us to do our best work while making minimal interruptions to everything else. A proper control process looks something like this:
Propose a change (i.e., a pull request), and outline the issue.
Peer-review the change, which can take the form of design feedback or use case suggestions.
Verify the change via automated testing. This verification tests whether the code is good/valid/correct and is where we invest the most engineering resources.
Determine whether this can be merged.
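The gate at the end of that process can be sketched in a few lines. The field names and approval threshold below are hypothetical, not a prescription:

```python
# A sketch of the change control gate: a change may merge only when it has
# been proposed, peer-reviewed, and verified by automated tests.
from dataclasses import dataclass


@dataclass
class Change:
    description: str    # the proposal: what issue does this address?
    approvals: int      # peer reviews received
    tests_passed: bool  # result of automated verification


def can_merge(change: Change, required_approvals: int = 1) -> bool:
    return (
        bool(change.description)
        and change.approvals >= required_approvals
        and change.tests_passed
    )
```

In practice, your CI system and code host enforce this for you (branch protection rules, required status checks); the point is that the decision is mechanical, not a judgment call made at merge time.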
When it comes to the actual tools that you weave together, you want infrastructure that’s going to run whatever you tell it to run. Newer SaaS services try to be drop-in without steep learning curves, allowing you to quickly hook up and go. Travis CI and GitHub are great tools for working with open source. You may also want to look into Jenkins, Circle CI, Codeship, and GitLab.
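As one illustration of how drop-in these services aim to be, a minimal Travis CI configuration lives in a `.travis.yml` file at the root of the repository. The language version and commands below are illustrative and depend entirely on your project:

```yaml
# .travis.yml — a minimal sketch; values here are hypothetical
language: python
python:
  - "3.9"
install:
  - pip install -r requirements.txt
script:
  - pytest
```

With a file like this in place, every pull request is built and tested automatically, which is exactly the verification step in the change control process above.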
Step 2: Deployment
Continuous shipping underlines the importance of self-serve deployments, which give teams control of their projects. Five years ago, a lot of companies had a designated release manager who would press the deploy button. Other companies had daily check-ins to see what changes would go out that day. In both scenarios, someone needed to be physically available to sign off on those changes.
Individual contributors should be entirely responsible for their changes.
These processes (thankfully) disappear with continuous deployment. Instead, individual contributors are entirely responsible for their changes. If you’re on a team that is responsible for an API, for example, you should be able to manage that API yourself. For the last 15 years, large companies have created teams that build a platform layer with the intention of letting product teams run their own tests and deploy and monitor their own code. Now, smaller companies are adopting this process as well.
The deployment phase of continuous shipping demands builds that are repeatable. When building software, teams often rely on a tangle of dependencies that aren’t well controlled. Unfortunately, if a version changes, or a dependency’s dependency changes, the app built on top of that software might not work at all.
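One low-tech way to get repeatable builds is to pin every dependency, including transitive ones, to exact versions. In the Python world this might be a fully pinned requirements file (the packages and versions below are purely illustrative):

```text
# requirements.txt — versions are illustrative, not recommendations
Django==2.2.4
requests==2.22.0
urllib3==1.25.3    # a dependency's dependency, pinned explicitly
```

Lockfiles in other ecosystems (`package-lock.json`, `Gemfile.lock`, `Cargo.lock`) serve the same purpose: the build you deploy today is the build you can reproduce tomorrow.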
As mentioned before, part of continuous shipping is risk minimization. In the deployment phase, minimizing risk comes in the form of a rollout strategy. While this process can be challenging, the effort pays dividends. Rollout strategies often don’t exist at small companies, but they should. The simplest version of a rollout strategy is deploying to a staging environment first and verifying that your code is working correctly. While this method is fine, it doesn’t scale and isn’t entirely useful. Instead, a slow rollout process is often the more appropriate approach, where the change is rolled out to small percentages of your customers, and you can back out if something breaks.
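A slow rollout needs a stable way to decide which customers see the change. One common approach, sketched here with hypothetical feature and customer names, is to hash each customer into a fixed bucket and enable the change only for buckets below the current rollout percentage:

```python
# A sketch of a percentage-based rollout check: hash each customer ID into
# a stable bucket in [0, 100), and enable the change only for buckets below
# the current rollout percentage.
import hashlib


def in_rollout(customer_id: str, feature: str, percent: int) -> bool:
    digest = hashlib.sha256(f"{feature}:{customer_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 100)
    return bucket < percent
```

Because the bucket is deterministic, raising `percent` from 5 to 25 keeps the original 5% of customers enrolled and adds new ones, and dropping it back to 0 is your escape hatch if something breaks.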
Do not allow anyone to circumvent the required tests.
Another contribution to risk minimization is blocking deployments that haven’t passed verification. Do not allow anyone to circumvent the required tests. Many companies have compliance regulations that prevent changes that bypass the change control process from deploying. While this is an easy rule to set, it requires diligence. Again, the process is time-consuming, but the result is happy teams and end-users who get to use your app exactly how they expect to.
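The blocking rule itself is simple to express; the hard part is diligence. As a sketch, with a hypothetical CI status lookup, a deploy entry point with no override path might look like:

```python
# A sketch of a deploy gate: refuse to deploy any build that has not passed
# verification. Deliberately, there is no "force" flag to bypass the check.

def deploy(build_id: str, ci_status: dict) -> None:
    if ci_status.get(build_id) != "passed":
        raise RuntimeError(
            f"build {build_id} has not passed verification; refusing to deploy"
        )
    print(f"deploying {build_id}")
```

The moment a bypass flag exists, someone will use it under deadline pressure, which is why compliance-minded companies enforce this rule in tooling rather than policy.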
Unfortunately, the deployment phase of continuous shipping has fewer ready-made tools than integration does. While tools such as Firebase and Heroku are great for side projects, they don’t fit the requirements mentioned above; you’ll need additional control for larger projects. Although some companies choose to build their own tools for this phase, that isn’t necessary. Products that act as Infrastructure-as-a-Service might be the way to go: AWS, Azure, Google Cloud Platform.
Next time, we’ll round out this series with part three, where we’ll focus on monitoring and feedback. Until then, discover how Sentry moves you closer to continuous shipping with seamless integration into your favorite apps and services.