Old Habits Make DevOps Transformation Hard

My father is a computer guy. Mainframes and all of the technologies that were cool a few decades ago. I have early memories of playing with fascinating electro-mechanical stuff at Dad’s office and its datacenter. Printers, plotters, and their last remaining card punch machine in a back corner. Crazy cool stuff for a kid if you have ever seen that gear in action. There’s all kinds of noise and things zipping around.

Now the interesting thing about talking to Dad is that he is seriously geeky about tech. He has always been fascinated by how technology will be applied, and he completely groks the principles and potential of new technology even if he does not really get the specific implementations. Recently he had a problem printing from his iPhone. He had set it up a long time ago and it worked great. He’s 78 and didn’t bat an eye at connecting his newfangled mobile device to his printer. What was interesting was his behavior when the connection stopped working. He tried mightily to fix the connection definition rather than deleting the configuration and simply recreating it with the wizard. That got me thinking about “fix it” behavior and troubleshooting behavior in IT.

My dad, as an old IT guy, had long experience and training that said you fix things when they get out of whack. You certainly didn’t expect to delete a printer definition back in the day – you would edit the file, test it, and fiddle with it until you got the thing working again. After all, you had relatively few pieces of equipment in the datacenter and offices. That approach makes no sense in a situation where you can simply blow the problematic thing away and let the software automatically recreate it.

And that made me think about DevOps transformations in the enterprise.

I run into so many IT shops where people far younger than my dad struggle mightily to troubleshoot and fix things that could (or should) be easily recreated. To be fair – some troubleshooting is valuable and educational, but a lot is over routine stuff that is either well known, industry standard, or just plain basic. Why isn’t that stuff in an automated configuration management system? Or a VM snapshot? Or a container? Heck – why isn’t it in the Wiki, at least?! And the funny thing is that these shops are using virtualization and cloud technologies already, but treat the virtual artifacts the same way as they did the long-lasting, physical equipment-centric setups of generations past. And that is why so many DevOps conversations come back to culture. Or perhaps ‘habit’ is a better term in this case.
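To make the contrast concrete, here is a minimal Python sketch of the ‘blow it away and recreate it’ instinct applied to a container. The container name, image, and use of the Docker CLI are illustrative assumptions, not a prescription:

```python
import subprocess

# Hypothetical container name and image; stand-ins for whatever the real
# definition in source control says.
CONTAINER = "app-web"
IMAGE = "registry.example.com/app-web:stable"

def recreate_container() -> None:
    # Throw the broken instance away (ignore the error if it is already gone)...
    subprocess.run(["docker", "rm", "-f", CONTAINER], check=False)
    # ...and recreate it from the known-good definition instead of patching it.
    subprocess.run(["docker", "run", "-d", "--name", CONTAINER, IMAGE], check=True)

if __name__ == "__main__":
    recreate_container()
```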

Breaking habits is hard, but we must if we are to move forward. When the old ways do not work for a retired IT guy, you really have to think about why anyone still believes they work in a current technology environment.

This article is on LinkedIn here: https://www.linkedin.com/pulse/old-habits-make-devops-transformation-hard-dan-zentgraf


Your Deployment Doc Might Not be Useful for DevOps

One of the most common mistakes I see people making with automation is the assumption that they can simply wrap scripts around what they are doing today and be ‘automated’. The assumption is based on some phenomenally detailed runbook or ‘deployment document’ that has every command that must be executed. In ‘perfect’ sequence. And usually in a nice bold font. It was what they used for their last quarterly release – you know, the one two months ago? It is also serving as the template for their next quarterly release…

It’s not that these documents are bad or not useful. They are actually great guideposts and starting points for deriving a good automated solution to releasing software in their environment. However, you have to remember that these are the same documents that are used to guide late night, all hands, ‘war room’ deployments. The idea that those documented procedures are repeatably automatable is suspect, at best, based on that observation alone.

Deployment documents break down as an automatable template for a number of reasons. First, there are almost always undocumented assumptions about the state of the environment before a release starts. Second, reusing the last document does not account for procedural, parameter, or other changes between the prior and the upcoming releases. Third, the processes usually rely, unconsciously, on interpretation or tribal knowledge on the part of the person executing the steps. Finally, steps that make sense in a sequential, manual process will not take advantage of the intrinsic benefits of automation, such as parallel execution, elimination of data entry tasks, and so on.
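As a rough illustration of how those four gaps turn into code, consider the following Python sketch. Every name in it (the service list, the precondition check, the deploy step) is a hypothetical placeholder; the point is that assumptions become explicit checks, versions become parameters, tribal knowledge becomes a reviewable function, and independent steps run in parallel:

```python
import concurrent.futures
import shutil

# Hypothetical deploy targets; a real release would read these from the
# pipeline's configuration rather than a hardcoded list.
SERVICES = ["web", "api", "worker"]

def check_preconditions() -> None:
    # Gap 1: an undocumented environment assumption becomes an explicit check.
    if shutil.disk_usage("/").free < 5 * 1024**3:
        raise RuntimeError("less than 5 GB of free disk before deploying")

def deploy(service: str, version: str) -> str:
    # Gap 3: a step that used to live in someone's head becomes a single,
    # reviewable function. (Actual deployment logic is a placeholder here.)
    return f"{service} deployed at {version}"

def release(version: str) -> None:
    # Gap 2: the version is a parameter, not a value baked into a document.
    check_preconditions()
    # Gap 4: steps that were sequential on paper run in parallel here.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        for result in pool.map(lambda s: deploy(s, version), SERVICES):
            print(result)

if __name__ == "__main__":
    release("2024.1.0")
```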

The solution is to set the expectation from the outset – particularly with those who hold organizational power – that the document is only a starting point. Build the automation iteratively and schedule multiple iterations at the start of the effort. This can be a great way to introduce Agile practices into the traditionally waterfall approaches used in operations-centric environments. This approach allows for the effort that will be required to fill in gaps in the document’s approach, negotiate standard packaging and tracking of deployable artifacts, add environment ‘config drift’ checks, or address any of the other common ‘pitfall’ items that require more structure in an automated context.
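For example, an environment ‘config drift’ check can start as small as the following Python sketch. The file path and digest are hypothetical placeholders; a real implementation would load its expected state from version control or a configuration management tool:

```python
import hashlib
from pathlib import Path

# Hypothetical path and digest; a real check would source its expected
# state from version control or a configuration management tool.
EXPECTED_SHA256 = {
    "/etc/app/app.conf": "<known-good sha256 digest>",
}

def drift_report() -> list[str]:
    """Return the files whose current content no longer matches expectations."""
    drifted = []
    for path, expected in EXPECTED_SHA256.items():
        actual = hashlib.sha256(Path(path).read_bytes()).hexdigest()
        if actual != expected:
            drifted.append(path)
    return drifted

if __name__ == "__main__":
    print("drifted files:", drift_report() or "none")
```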

This article is also on LinkedIn here: https://www.linkedin.com/pulse/your-deployment-doc-might-useful-devops-dan-zentgraf

Automation and the Definition of Done

“Done” is one of the more powerful concepts in human endeavor. Knowing that something is “done” enables us to move on to the next endeavor, allows us to claim compensation, and sends a signal to others that they can begin working with whatever we have produced. However, assessing “done” can be contentious – particularly where the criteria are undefined. This is why the ‘definition of done’ is a major topic in software delivery. Software development has a creative component that can lead to confusion and even conflict. It is not a trivial matter.

Automation in the software delivery process forces the team to create a clear set of completion criteria early in the effort, thus reducing uncertainty around when things are ‘done’ as well as what happens next. Though they at first appear to be opposites, with done defining a stopping point and automation being much more about motion, the link between ‘done’ and automation is synergistic. Being good at one makes the team better at the other and vice-versa. Being good at both accelerates and improves the team’s overall capability to deliver software.

A more obvious example of the power of “done” appears in the Agile community. For example, Agile teams often have a doctrine of ‘test-driven development’ where developers write the tests first. Further examples include the procedural concepts for completing the User Stories in each iteration, or sprint, so that the team can clearly assess completion in the retrospective. Independent of these examples, validation-centric scenarios are an obvious area where automation can help underpin “done”. In the ‘test-driven development’ example, test suites that run at various points provide unambiguous feedback on whether more work is required. Those test suites become part of the Continuous Integration (CI) job so that they run every time a developer commits new code. If those tests pass, then the build automatically deploys into the integration environment for further evaluation.
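A minimal sketch of that CI gate, assuming a pytest-based test suite and a hypothetical deploy.sh script (both stand-ins for whatever the team’s CI server actually invokes on each commit), might look like this in Python:

```python
import subprocess
import sys

# Assumes a pytest test suite and a hypothetical deploy.sh script; both
# are placeholders for the team's real CI commands.

def main() -> int:
    tests = subprocess.run(["pytest", "tests/"])
    if tests.returncode != 0:
        print("not done: tests failed, nothing deploys")
        return tests.returncode
    # Tests pass, so the build is 'done' enough to move to integration.
    deploy = subprocess.run(["./deploy.sh", "integration"])
    return deploy.returncode

if __name__ == "__main__":
    sys.exit(main())
```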

Looking a bit deeper at the simple process of automatically testing CI builds reveals how automation forces a more mature understanding of “done”. Framed another way, the fact that the team has decided to have that automated assessment means that they have implicitly agreed to a set of specific criteria for assessing their ‘done-ness’. That is a major step for any group and evidence of significant maturation of the overall team.

That step of maturation is crucial, as it enables better flows across the entire lifecycle. For example, understanding how to map ‘done-ness’ into automated assessment is what enables advanced delivery methodologies such as Continuous Delivery. Realistically, any self-service process, whether triggered deliberately by a button push or autonomously by an event such as delivering code, cannot exist without a clear, easily communicated understanding of when that process is complete and how successful it was. No one would trust the automation were it otherwise.

There is an intrinsic link between “Done” and automation. They are mutual enablers. Done is made clearer, easier and faster by automation. Automation, in turn, forces a clear definition of what it means to be complete, or ‘done’. The better the software delivery team is at one, the stronger that team is at the other.


This article is also on LinkedIn here: https://www.linkedin.com/pulse/automation-definition-done-dan-zentgraf

Grow a (Delivery) Backbone

Feature delivery in a DevOps/Continuous Delivery world is about moving small batches of changes from business idea through to production as quickly and frequently as possible. In a lot of enterprises, however, this proves rather problematic. Most enterprises organize around functional teams rather than value delivery channels. That leaves the business owners or project managers with the quandary of how to shepherd their changes through a mixture of different functional teams in order to get releases done. That shepherding effort means negotiating for things like schedules, resources, and capacity for each release – all of which takes time that simply does not exist when using a DevOps approach. That means that the first step in transforming the release process is abandoning the amoebic flexibility of continuous negotiations in favor of something with more consistency and structure: a backbone for continuously delivering changes.

The problem, of course, is where and how to start building the backbone. Companies are highly integrated entities: disrupting one functional area tends to have a follow-on effect on other functions. For example, dedicating resources from shared teams to a specific effort implies backfilling those resources on the other efforts they previously supported. That takes time and money that are better spent elsewhere. This is why automation very quickly comes to the forefront of any DevOps discussion as the driver of the flow. Automation is relatively easy to stand up in parallel to existing processes, it lends itself to incremental enhancement, and it has the intrinsic ability to multiply the efforts of a relatively small number of people. It is relatively easy to prove the ROI of automating business processes, which, really, is why most companies got computers in the first place.

So, if automation is the backbone of a software delivery flow, how do you get to an automated flow? Most organizations follow three basic stages as they add automation to their software delivery flow.

  1. Continuous Integration to standardized artifacts (bonus points for a Repository)
  2. Automated Configuration Management (bonus for Provisioning)
  3. Orchestration

Once a business has decided that it needs more features, sooner, from its software development organization, the development teams will look at Agile practices. Among these practices is Continuous Integration (supporting the principle from the Agile manifesto that “working software is the primary measure of progress”). This will usually take the form of a relatively frequent build process that produces a consumable form of the code. The frequency of delivery creates a need for consistency in the artifacts produced by the build and a need to track the version progression of those artifacts. This can take the form of naming conventions, directory structures, etc., but eventually it will lead to a binary artifact repository.
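Such a naming convention can start as simply as the following Python sketch. The application name, version scheme, and repository URL are hypothetical; a real pipeline would publish through its binary repository product’s own client or API:

```python
import datetime
import subprocess

# Hypothetical repository URL; a real pipeline would use its binary
# repository product's own client or API.
REPO_URL = "https://repo.example.com/artifacts"

def artifact_name(app: str, version: str, build: int) -> str:
    # Naming convention: app-version-build-date, so any artifact's lineage
    # is readable at a glance and sortable on disk.
    stamp = datetime.date.today().isoformat()
    return f"{app}-{version}-b{build}-{stamp}.tar.gz"

def publish(path: str) -> None:
    # Upload via HTTP PUT; '-f' makes curl fail loudly on an HTTP error.
    subprocess.run(["curl", "-fT", path, f"{REPO_URL}/{path}"], check=True)

if __name__ == "__main__":
    print(artifact_name("orders-service", "1.4.2", 137))
```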

The availability of frequent updates stimulates demand for more deployments to more consistent test environments and even to production. The difficulties driven by traditional test lab Configuration Management practices, the drift between test labs and production, and the friction pushing back on iterations cause Development to suddenly become a lot more interested in how their code actually gets into the hands of consumers. The sudden attention and spike in workload cause Operations to become very interested in what Development is doing. This is the classic case for why DevOps became such a topic in IT. The now classic solution for this immediate friction point is to use an automated configuration management solution to provide consistent, high-speed enforcement of a known configuration in every environment where the code will run. These systems are highly flexible, model- and script-based, and they enable environment changes to be versioned in a way that is very consistent with how developed code is versioned. The configuration management effort will usually evolve to include integration with provisioning systems that enable rapid creation, deletion, or refresh of entire environments. Automated Configuration Management, then, becomes the second piece of the delivery backbone; however, environment sprawl and the lack of feedback loops quickly show the need for another level.
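At its heart, that enforcement is a converge loop: compare the desired model against the actual state and correct any differences. Here is a deliberately tiny Python sketch of the idea; the desired-state dict and the inspection stub are hypothetical, and real tools (Puppet, Chef, Ansible, and the like) model resources far more richly:

```python
# Hypothetical desired-state model; real tools use rich resource models
# rather than a flat dict of key/value pairs.
DESIRED = {"nginx.version": "1.24", "app.port": "8080"}

def current_state() -> dict[str, str]:
    # Placeholder: a real implementation would inspect the live system.
    return {"nginx.version": "1.22", "app.port": "8080"}

def converge() -> None:
    actual = current_state()
    for key, want in DESIRED.items():
        have = actual.get(key)
        if have != want:
            # apply_change(key, want) would go here in a real tool.
            print(f"converging {key}: {have!r} -> {want!r}")
        else:
            print(f"{key} already compliant")

if __name__ == "__main__":
    converge()
```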

The third stage of growing a delivery backbone is Orchestration. It deals with the fact that there are so many pieces to modern application systems that it is inefficient and impractical for humans to track and manage them all in every environment to which those pieces might be deployed. Orchestration systems deal with deliberate sequencing of activities, integration with human workflow, and management of the pipeline of changes in large, complex environments. Tools at this level address the fundamental fact that, even in the face of high-speed automated capabilities, some things have to happen in a particular order. This can be due to technical requirements, but it is often due to fundamental business requirements: coordinating feature improvements with consuming user communities, coordinating with a marketing campaign, or the dreaded regulatory factors. Independent of the reason, Orchestration tools allow teams to answer questions such as ‘what is in which environment’, ‘are the dependent pieces set’, and ‘what is the actual state of things over there’. In other words, as the delivery backbone grows, Orchestration delivers precise control over that backbone and enables the organization to capture the full value of the automation.
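The bookkeeping behind those questions can be pictured with a small Python sketch. The deployment records here are made-up sample data; a real orchestration tool maintains this inventory automatically as pipelines run:

```python
from dataclasses import dataclass

# Made-up sample records; a real orchestration tool maintains this
# inventory automatically as pipelines run.
@dataclass
class Deployment:
    component: str
    version: str
    environment: str

DEPLOYMENTS = [
    Deployment("web", "2.3.1", "staging"),
    Deployment("api", "2.3.1", "staging"),
    Deployment("api", "2.3.0", "production"),
]

def whats_in(environment: str) -> dict[str, str]:
    """Answer 'what is in which environment' for one environment."""
    return {d.component: d.version for d in DEPLOYMENTS if d.environment == environment}

def dependencies_aligned(components: list[str], environment: str) -> bool:
    """Answer 'are the dependent pieces set': same version across components."""
    versions = {whats_in(environment).get(c) for c in components}
    return len(versions) == 1 and None not in versions

if __name__ == "__main__":
    print("staging:", whats_in("staging"))
    print("web+api aligned in staging:", dependencies_aligned(["web", "api"], "staging"))
```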

These three stages of backbone growth, perhaps better termed ‘evolution’, may take slightly different forms in different organizations. However, they consistently appear as a natural progression as DevOps thinking becomes pervasive and organizations seek to mature and optimize their delivery flows. Organizations seeking to accelerate their DevOps adoption can use variations on this pattern to deliberately evolve or accelerate the growth of their own delivery backbone.


This article is also on LinkedIn here: https://www.linkedin.com/pulse/grow-delivery-backbone-dan-zentgraf