Continuous — Build, Break, and Fix Fast
This is one of two PagerDuty posts on Continuous. Check out our first one: Are You Ready for Continuous Deployment?
Continuous Overload
If you pay any attention to modern software delivery conversations, it sometimes feels like you are being beaten over the head with a Continuous magic wand: Continuous Integration, Continuous Delivery, Continuous Deployment, Continuous Documentation, and so on. The idea is almost frustratingly simple: go fast. But the benefits of going fast extend well beyond shipping more builds to your user base. Perhaps the greatest value is long-term, and it is hidden in the fact that you can break the application continuously without fear, because you can fix it continuously as well.
What does speed get you in the long term? Over a period of three months, can you say anything more about your pipeline than “We had more builds”? More releases are one thing, but a delivery chain that does not support innovation is nothing more than a way to get from point A to point B. To demonstrate real success in a DevOps-driven organization, you need to be able to show that your software quality and functionality increase as well, which means all that cool functionality sitting buried in your backlog finally percolates to the top.
Why Continuous?
What Continuous affords us is the ability to break our applications with confidence, because we know we can rapidly alert on any issues and iterate to a new build to address them. Teams can finally build the feature they have been dying to ship but have avoided due to perceived risk.
What break/fail fast really means is that you can be opportunistic about your functionality, which is quite possibly the only way to build functionality that responds to user demands in near-real-time, or to learn how new functionality impacts the application and its adoption. Without fail fast, applications may be doomed to purely linear releases, no matter how short your sprints are. And they can fall into the trap we are all too familiar with: stagnation, the eventual slowing of new functionality in favor of very small changes to existing functionality. When this happens your application gets stale, and the only remedies are major rewrites, refactoring, or whole new applications. That is not Continuous at all, or at least not sustained Continuous.
Canary Releases
The extreme of fail fast and fix fast is something called canary releases. In a canary release you run one or more new releases of the application in parallel with the existing production version. Once the release(s) are deployed, you divert all of the traffic, or sub-segments of it, from production to the new release environments. The purpose of sub-segments is to run A/B tests on slight variations of the release.
If anything goes wrong with a canary release, you will know quickly, provided you have a robust alerting mechanism in place. Then you revert traffic back to production. The exercise might have a small negative impact on your users, but the response is so rapid that it registers as little more than a glitch. New functionality can be developed and validated within day-length time frames.
Because you have learned more about the new functionality, you can either drop it or fix it within that canary release, and rapidly test again. This truly is an iterative model, and it can be set up not as a single iteration, but as multiple iterations running in parallel.
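To make the traffic-diversion and revert mechanics concrete, here is a minimal sketch in Python. It is illustrative only: the backend URLs, the 5% canary weight, and the 2% error threshold are assumptions rather than anything prescribed here, and in practice the split would live in your load balancer or PaaS routing layer, not in application code.

```python
import random

# Hypothetical weighted canary router. All names and numbers here are
# placeholders chosen for illustration.
PRODUCTION = "https://app.example.com"
CANARY = "https://canary.example.com"
CANARY_WEIGHT = 0.05  # divert roughly 5% of traffic to the new release


def route_request() -> str:
    """Pick the backend a given request should be sent to."""
    return CANARY if random.random() < CANARY_WEIGHT else PRODUCTION


def evaluate_canary(error_rate: float, threshold: float = 0.02) -> str:
    """Decide whether to revert or keep iterating on the canary.

    The error rate would come from your monitoring/alerting platform;
    the threshold is an arbitrary placeholder.
    """
    return "revert" if error_rate > threshold else "promote"


if __name__ == "__main__":
    sample = [route_request() for _ in range(1000)]
    print(f"canary share: {sample.count(CANARY) / len(sample):.1%}")
    print(evaluate_canary(error_rate=0.001))
```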
This model could also be considered the extreme of continuous deployment, minus some of the automated functional and system testing that would normally run before a release; those test suites can simply take too long to support the concept. It does assume a few things, the biggest being that your application has a large enough user base and traffic volume to support quick tests of new functionality.
I have not figured out how this would work for a highly distributed microservices application, but I am sure it is possible, provided you have the right tools.
The Tools to Get It Done
Of course, such an advanced way of looking at both releases and application architecture requires great tooling to execute. The top three tools for fail fast are as follows:
- Release Automation: Your release automation tool needs to be able to handle several releases at once. They all do, but that is not what I mean. Supporting multiple parallel releases requires very good state management and dashboards that visualize releases. Without this visibility, it is very difficult to know which releases are where and which have been reverted, which can result in serious problems, and the additional overhead might make the process not worth it.
- On-Demand Cloud Environments: The infrastructure to support this is not about power, but flexibility. Platform as a Service (PaaS) is the most suitable form of infrastructure to support canary releases, because with PaaS you are provisioning against a pool of resources, not actual VMs. This makes provisioning faster, and also easier to manage, because you do not have to worry about orchestration and similar concerns. Most PaaS environments also support easier traffic swapping. With Infrastructure as a Service (IaaS), you will need to control traffic via your DNS or load balancer, which is likely one additional step. However, there is no reason IaaS should be excluded from break-fast processes, especially if you leverage container technology like Docker, which makes it nearly as simple as PaaS. Whether IaaS or PaaS, developers need to be able to spin up and tear down as many environments as they desire, on demand (see the environment sketch after this list). If access to environments is gated, there is no way to achieve parallelism or to respond to issues with a new, updated release. The process I'm pitching requires full-stack deployments, so deploying onto existing environments is also not an option: with such a rapid release-and-revert cycle, it is very easy to accumulate variables that would contaminate persistent environments.
- Alerting: Logging your environments is one thing. It is important to implement logging, but mostly for historical data, and in canary releases historical is too late. The response to what happens in each build needs to be quick, and the lifespan of an iteration is very short, so you need a very strong alerting platform that can push alerts to you as they happen. The alerting platform must be smart as well: because of the frequency and number of parallel releases, too much information can easily become a problem, and without intelligent filtering, responding to any issue requires a lengthy triage process (see the alerting sketch after this list).
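As a rough illustration of the on-demand environment point, here is a minimal sketch of a spin-up/tear-down cycle driven by the Docker CLI. The image tag, container name, and port mapping are hypothetical placeholders, and a real pipeline would run this through its release automation tool rather than a standalone script.

```python
import subprocess

# Hypothetical full-stack canary environment managed via the Docker CLI.
# The image tag, container name, and ports are placeholders.
IMAGE = "myapp:canary-42"
NAME = "canary-42"


def spin_up() -> None:
    """Start a throwaway canary environment from a freshly built image."""
    subprocess.run(
        ["docker", "run", "-d", "--name", NAME, "-p", "8081:8080", IMAGE],
        check=True,
    )


def tear_down() -> None:
    """Destroy the environment so no state leaks into the next iteration."""
    subprocess.run(["docker", "rm", "-f", NAME], check=True)


if __name__ == "__main__":
    spin_up()
    # ...divert a slice of traffic, watch the alerts, then:
    tear_down()
```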
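And on the alerting point, the essential behavior is that the canary pushes a signal the moment its health degrades instead of waiting for someone to read a log. The sketch below is an assumption-laden stand-in: send_alert is a placeholder for whatever your alerting platform's API provides, and the metrics and thresholds are made up for illustration.

```python
from dataclasses import dataclass


@dataclass
class CanaryHealth:
    release: str
    error_rate: float      # fraction of failed requests, from your metrics source
    latency_p95_ms: float  # 95th-percentile latency


def send_alert(summary: str) -> None:
    """Placeholder for a push to your alerting platform's incident API."""
    print(f"ALERT: {summary}")


def should_revert(health: CanaryHealth) -> bool:
    """Return True if the canary breached a threshold. Thresholds are illustrative."""
    if health.error_rate > 0.02:
        send_alert(f"{health.release}: error rate {health.error_rate:.1%} exceeds 2%")
        return True
    if health.latency_p95_ms > 500:
        send_alert(f"{health.release}: p95 latency {health.latency_p95_ms:.0f} ms exceeds 500 ms")
        return True
    return False


if __name__ == "__main__":
    print(should_revert(CanaryHealth("canary-42", error_rate=0.031, latency_p95_ms=180)))
```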
It Is Not All Technology
The concepts around releasing faster, breaking faster, and fixing faster are not complex, but implementing them in existing environments can be. Any experienced developer, operations engineer, or QA person knows that you cannot simply flip the canary release switch.
Implementation is a journey, and the process above is the goal. What is nice about the already popular practice of continuous integration is that the experimental fail-and-revert process can first be implemented in integration environments, where the impact is limited to your internal team. In that case, the change will mostly affect QA, who become responsible for testing releases as soon as they are built, so the biggest organizational shift happens there. (Still far less than implementing it team-wide and in production.)
The bottom line is that the tools are available to build faster, fail faster, and fix faster, a process that will not only increase the number of builds you ship in a year, but also the innovation that produces new functionality and quality. Because the tools already exist, the burden is on the team to find a path from their existing release processes to the new ones.