For those of you familiar with military jargon, you probably know what fire-and-forget is. It’s rather easy to deduce the meaning of the term, but just for clarity, I’ll briefly explain. Fire-and-forget means firing a missile without worrying about guiding it to its target; onboard systems, such as radar or infrared homing, take care of that.
Why this crude introduction? Because fire-and-forget tends to manifest itself in the software development world as well. A developer checks in code and submits a PR, and as far as they’re concerned, once the PR is merged, they’re done. In other words: they fired a missile (checked in code), guidance systems calibrated the route (various tests), and the missile hit its target (the PR was merged). Done.
But that’s exactly the problem. Fire-and-forget is understandable when the aim is to harm the target. My aim, though, is to integrate my changes into the codebase, not to ruin the system. That’s why I chose such a crude analogy: to highlight the attitude of forgoing responsibility the moment a PR is merged.
The state of CI/CD
A recent survey showed that many teams practice continuous integration but not necessarily continuous delivery. Continuous delivery adoption rates are still relatively low, and they are also hard to measure and evaluate.
But I believe there’s a bigger problem here, and maybe it’s part of why continuous delivery adoption rates are still low. Continuous delivery isn’t just about continuously delivering features. It’s about what happens before and after delivery, at the level of team effort. And that goes beyond merging code and deploying it.
So I tested it
In the practice of CI/CD, testing is integral. I branch off master and work on introducing a feature, improving performance, or fixing a bug. I then run unit tests, and if they pass, I merge. If I’m meticulous, I’ll also run integration tests. Depending on the scope of my responsibility, I might even run system and acceptance tests.
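To make that concrete, here’s a minimal sketch of what such a pre-merge safety net might look like with pytest. Everything in it is hypothetical, invented for illustration: the module under test (pricing), its collaborator (checkout), and all the function names.

```python
# test_pricing.py -- a minimal pytest sketch of a pre-merge safety net.
# The pricing and checkout modules are hypothetical, for illustration only.
import pytest

from pricing import apply_discount  # hypothetical module under test


def test_apply_discount_basic():
    # Unit level: pure logic, no external dependencies.
    assert apply_discount(price=100.0, percent=10) == 90.0


def test_apply_discount_rejects_negative_percent():
    # Unit level: the error path matters as much as the happy path.
    with pytest.raises(ValueError):
        apply_discount(price=100.0, percent=-5)


@pytest.mark.integration  # custom marker, registered in pytest.ini
def test_discount_applied_through_checkout():
    # Integration level: exercise the path other components actually call.
    from checkout import create_order  # hypothetical collaborator
    order = create_order(items=[("sku-1", 100.0)], discount_percent=10)
    assert order.total == 90.0
```

Running the unit tests on every push and the integration-marked tests before merge is one common way to stage this.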
To that I say: so what if I tested it? A friend of mine works for a company that develops tools for automated UI testing of mobile and web apps. They have a joke there: if you ran a suite of tests and they all passed, you probably did something wrong. The idea is that if everything goes smoothly, chances are your test coverage is lacking.
So you ran some tests, they all passed, and your code was merged. But wait: what if you’re responsible for an internal API that other teams rely on? How do you know you didn’t introduce any breaking changes? You can run integration tests, of course, but those only go as far as your knowledge of your own code, and they’re limited to the scope of the feature you developed. How can you tell in what ways other teams actually use your API? When you test for API usage, internal or external, you’re biased towards your own code, and let’s face it, we never really manage to cover all scenarios. That’s why exercising the API against real data and real traffic is the best way to check that your new code didn’t introduce bugs or regressions into the whole system, and that the API remains functional and resilient.
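One way to approximate “real data and real traffic” before users hit the new code is to replay captured production requests against a staging deployment and compare the responses. Here’s a rough sketch of that idea; the staging URL, the capture file format, and every name in it are assumptions for illustration:

```python
# replay_traffic.py -- a sketch of replaying recorded production requests
# against a staging deployment to surface breaking API changes.
# The staging URL and the capture format are hypothetical assumptions.
import json

import requests

STAGING_URL = "https://staging.internal.example.com"  # hypothetical


def replay(capture_path: str) -> list[str]:
    """Replay captured requests; return a description of each regression."""
    failures = []
    with open(capture_path) as f:
        for line in f:
            # Assumed record shape: {"method", "path", "body", "status"}
            rec = json.loads(line)
            resp = requests.request(
                rec["method"], STAGING_URL + rec["path"], json=rec.get("body")
            )
            # A status change against real traffic is a likely breaking change.
            if resp.status_code != rec["status"]:
                failures.append(
                    f'{rec["method"]} {rec["path"]}: '
                    f'expected {rec["status"]}, got {resp.status_code}'
                )
    return failures


if __name__ == "__main__":
    problems = replay("captured_requests.jsonl")
    for p in problems:
        print(p)
    raise SystemExit(1 if problems else 0)
```

Comparing only status codes is crude; a fuller version would diff response bodies too, but even this catches the breakages your own integration tests are blind to.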
But thorough testing and code coverage are relatively easy compared to what I think should be expected from a dev after a PR merge: owning up to one’s work, assuming responsibility, and maintaining synchronous communication. And I believe that this is what’s keeping teams from adopting continuous delivery.
Be there
I was talking to a colleague not long ago about this whole merge-and-forget thing. He shared his own experience attending a sync meeting before the deployment of two new features. What he remembered most was the team lead stressing, over and over, the importance of post-deployment monitoring. The team lead appointed the two devs who had contributed most of the code to be the ones to raise the alarm if anything went wrong, so that a rollback could take place as soon as possible, and to lead the post-mortem investigation afterwards. So you see, merging code didn’t end the devs’ work. The code was already merged, QA had authorized deployment, documentation was ready, and marketing was ready to blast customers. Still, the whole dev team was there to prepare for D-day.
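What might such a post-deployment watch look like in practice? Here’s a crude sketch: poll a health endpoint and raise the alarm if the error rate spikes, so a rollback can start immediately. The endpoint, the payload field, and the thresholds are all hypothetical:

```python
# watch_deploy.py -- a sketch of a post-deployment watch: poll a health
# endpoint and raise the alarm if the error rate spikes.
# The URL, the payload field, and the thresholds are hypothetical.
import time

import requests

HEALTH_URL = "https://api.example.com/healthz"  # hypothetical endpoint
ERROR_RATE_THRESHOLD = 0.02                     # alarm above 2% errors
CHECK_INTERVAL_SECONDS = 30
WATCH_DURATION_SECONDS = 3600                   # stay on watch for an hour


def current_error_rate() -> float:
    """Read the error rate from the service's health payload."""
    resp = requests.get(HEALTH_URL, timeout=5)
    resp.raise_for_status()
    return resp.json()["error_rate"]  # assumed field in the health payload


def watch() -> None:
    deadline = time.time() + WATCH_DURATION_SECONDS
    while time.time() < deadline:
        rate = current_error_rate()
        if rate > ERROR_RATE_THRESHOLD:
            # In a real setup this would page the on-call devs and kick
            # off the rollback; here we just raise the alarm and bail out.
            print(f"ALARM: error rate {rate:.1%} exceeds threshold, "
                  "consider rolling back")
            return
        time.sleep(CHECK_INTERVAL_SECONDS)
    print("Watch period over, deployment looks healthy")


if __name__ == "__main__":
    watch()
```

The point isn’t the script itself; it’s that someone who knows the new code is actively watching the numbers when it goes live.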
That’s a lesson they don’t teach you at school, and something no one talks about when explaining what continuous delivery is all about. When you read up on continuous delivery, it’s usually summed up as test, test, test; everything fine? Click the button and deploy. Moreover, some go as far as promoting the idea that once your code is merged, your job is done. It’s understandable why some CI/CD service providers would want to market themselves this way: telling devs that all they have to do is merge and forget is a way of charming them into using their services. But this notion is wrong, and it encourages devs to forgo responsibility for their code. That, in turn, can be the root cause of many failed deployments. The important part that no one talks about is owning your code and your work, and being there when it’s delivered to make sure it actually works.
Think universally, not locally
If talking about communication and the roles devs need to assume besides writing code sounds like “yeah, ok, we know there’s stuff to do other than code up features”, I don’t blame you. Everything I wrote above might seem trivial and common sense, but allow me to hit you with some clichés. First, common sense isn’t that common. Second, there’s a quote that says employees do the minimum not to get fired, while employers pay the minimum to keep people working. Not all devs and teams recognize the extras that come with the territory.
However, I recognize that some would still want to read more about what they can or should do to facilitate continuous delivery. So let’s go back to what I talked about at the beginning. Today’s software development world, especially in web services, has a lot of moving parts, i.e. microservices. If I develop a feature, internal or external, I need to think about, design for, and test for the whole picture, not just my own little world. That my unit tests passed could simply mean they passed my own biases; after all, I was the one who wrote the code. That integration tests passed could mean that the parts relying on my code, or the parts my code relies on, work well together. But what if we introduced bottlenecks into the whole system?
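A crude way to catch such a bottleneck is to compare the latency of a critical endpoint before and after the deploy, at the system level rather than the unit level. The endpoint, the baseline number, and the regression budget below are all made up for illustration:

```python
# latency_check.py -- a sketch of a system-level bottleneck check:
# compare p95 latency of a critical endpoint against a pre-deploy baseline.
# The endpoint, baseline, and regression budget are hypothetical.
import statistics
import time

import requests

ENDPOINT = "https://api.example.com/v1/search?q=test"  # hypothetical
BASELINE_P95_MS = 120.0  # hypothetical p95 recorded before the deploy


def p95_latency_ms(samples: int = 50) -> float:
    """Time repeated requests and return the 95th-percentile latency."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        requests.get(ENDPOINT, timeout=10)
        timings.append((time.perf_counter() - start) * 1000)
    # quantiles(n=20) yields 19 cut points; index 18 is the 95th percentile.
    return statistics.quantiles(timings, n=20)[18]


if __name__ == "__main__":
    p95 = p95_latency_ms()
    print(f"p95 latency: {p95:.1f} ms (baseline {BASELINE_P95_MS} ms)")
    if p95 > BASELINE_P95_MS * 1.5:  # arbitrary 50% regression budget
        raise SystemExit("possible bottleneck introduced by this deploy")
```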
So it goes beyond testing. It goes into staying committed to whatever we committed. It means assuming responsibility for our code. It’s about doing the extra work beyond writing beautiful code. It’s beyond being able to deliver fast; it’s about delivering efficiently and, most importantly, recovering gracefully from any failure. And it boils down to staying connected with our team and our work even after, and especially after, we release code into the wild.