DevOps is NOT a Job Title

Given my recent posts about organizational structure, I feel like I need to clarify my stance on this…

You know a topic is hot when recruiters start putting it in job titles.  I do believe that most organizations will end up with a team of “T-shaped people” focused on using DevOps techniques to ensure that systems can support an Agile business and its development processes.  However, I am not a fan of hanging DevOps on the title of everyone involved.

Here’s the thing: if you have to put it in the name to convince yourself or other people that you are doing it, you probably are not.  And the very people you hope to attract may well avoid your organization because it fails the ‘reality’ test.  In other words, you end up looking like you don’t get it.  A couple of analogies come to mind immediately.

  • First, let’s look at a country that calls itself the “People’s Democratic Republic of” somewhere.  That name is usually an indicator that the country is none of those things and that the only true part is the ‘somewhere’.  Similarly, putting “DevOps Sysadmin” on top of a job description that, just last week, said “Sysadmin” really isn’t fooling anyone.
  • Second, hanging buzzwords on job titles is like a 16-year-old painting racing stripes on the four-door beater they got as their first car.  With latex house paint.  You may admire their enthusiasm and optimism.  You certainly wish them the best.  But you have a pretty realistic assessment of the car.

Instead, DevOps belongs down in the job description.  DevOps in a job role is a mindset and an approach that defines how established skills are applied.  You are looking for a Release Manager to apply DevOps methods in support of your web applications.  Put it down in the requirements bullet points just as you would put things like ‘familiar with scripting languages’, ‘used to operating in an [Agile/Lean/Scrum] environment’, or ‘experience supporting a SaaS infrastructure’.

I realize that I am tilting at windmills here.  We went through a spate of “Agile” Development Managers and the number of  “Cloud” Sysadmins is just now tapering.  So, I guess it is DevOps’ turn.  To be sure, it is gratifying and validating to see such proof that DevOps is becoming a mainstream topic.  I should probably adopt a stance of ‘whatever spreads the gospel to the masses’.  But I really just had to get this rant off my chest after seeing a couple of serious “facepalm” job ads.

How a DOMO Fits

In my previous post, I discussed the notion of a DevOps Management Organization or DOMO.  As I said there, this is an idea that is showing up under different names at shops of varying sizes.  I thought I would share a drawing of one to serve as an example.  The basic structure is, of course, a matrix organization with the ability to have each key role present within the project.  It also provides for shared infrastructure services such as support and data.  You could reasonably easily replace the Business Analyst (BA) role with a Product Owner / Product Manager role, change “Project” to “Product”, and have a variant of this structure that I have seen implemented at a couple of SaaS providers around Austin.

This structure does assume a level of maturity in the organization as well as in the underlying infrastructure.  It is useful to note that the platform is designated as a “DevOps Platform”.  It would probably be better to phrase that as a cloud-type platform – public, private, or hybrid – where the permanence of a particular image is low, but the consistency and automation are high.  To be sure, not all environments have built such an infrastructure, but many, if not most, are building them aggressively.  The best time to look at the organization is while those infrastructures are being built and companies are looking for the best ways to exploit them.

[Diagram: Organization with DOMO]

Rise of the DOMO

A lot of the DevOps conversations I have had lately have been around organizational issues and what to do about the artificial barriers and silos that exist in most shops.  Interestingly, there is a pattern emerging among those discussing or even implementing changes to deal with this problem.  The pattern involves matrix organizations and the rise of what I call a “DevOps Management Organization” (DOMO).  The actual names vary, but the role of the organization is consistent.

Most software delivery organizations end up with some kind of matrix where product managers, project managers, engineers, architects, and QA are all tied to the success of a given project / product while maintaining a discipline-specific tie.  In the case of an ISV, you can add some other disciplines around fulfillment, support, etc.  The variable is whether the project/product is their direct reporting organization or they report to a discipline.  And the answer can depend on the role.  For example, a Project Manager might be part of the engineering organization and be a participant in the company’s PMO.  Or they might be part of the PMO and simply assigned to (and funded by) the project / product they are working on.

When you add Agile to the mix, the project dimension tends to take on a much higher level of primacy over the disciplines since the focus narrows to getting working software produced on a much tighter iteration timeline.  This behavior leads to the DevOps discussion as the project/product team discovers that there is no direct alignment of Operations to the project efforts.  Additionally, Operations is a varied / multi-disciplinary space in and of itself.  Thus it becomes extremely difficult for the project/product team to drive focused activity through Operations to deliver a particular iteration/release.  The classic DevOps problem.

The recent solution trend to this problem has been the creation of what I call “Ops Sherpa” roles in the project/product teams.  This role is a cross-disciplinary Ops generalist who is charged with understanding the state of the organization’s operations environment and making sure that the development effort is aligned with operational realities.  That includes full lifecycle responsibility – from ensuring that Dev and QA environments are reasonably equivalent to production configurations so that deliverables are properly qualified, to making sure that the various Operations disciplines are aware of (and understand) any changes that will be required to support a particular release deliverable.  In more mature shops, this may grow out of an existing Release Management role or, in a particularly large organization, become a dedicated team.

Critically, though, this role gets matrixed back into the Operations organization at a high enough level to sponsor action across the operational silos.  This point is what I call the Head of the DOMO, and it provides the point of leverage to deal with tactical problems as well as the strategic guidance role to drive cross-project continuous improvement into the operations platform space in support of faster execution (aka DevOps speed releases).

Whatever the name, the fact that large-scale companies are recognizing the value of deliberately investing in this space is a validation that being good at release execution is strategic to cost-effectively shortening release cycles.

Start Collaboration with Teaching

Every technology organization should force everyone in the group to regularly educate the group on what they are doing.  This should be a cross-discipline activity – not a departmental activity.  There are three reasons to do this.  The first is obvious – there is intrinsic value in sharing the knowledge.  The second is that the teachers themselves get better at what they teach, for the reasons described above.  The third is that it serves to create relationships among the groups that will open channels of collaboration as the organization grows.

This will create more opportunities for someone to have a critical insight on a situation and invent something valuable as a result.  It may be as basic as the fact that the team is faster at solving problems because they know who to call and have a relationship with that person.  It also means that you have a better chance of keeping your ‘bus number’ at healthier levels, thereby making your organization more resilient overall.  Of course, it will also make your overall organization more cohesive, meaning people will be somewhat more likely to stay, ensuring that you have fewer ‘bus number’ situations in the first place – or at least fewer that were not caused by an actual bus.

Classic Metrics for How Good You Are

One of the ‘best metrics ever’ is the classic “bus number”.  This measures how many people in an organization can be hit by a bus before that organization’s operations or progress is severely hindered by their absence.  It is a slightly funny way of measuring the resilience of an organization versus the anti-pattern of knowledge hoarding in an individual’s brain.  The idea is that a resilient organization should have a very high bus number and not be vulnerable to the loss of ‘critical staff’.
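To make the metric concrete, here is a minimal sketch (in Python, with made-up component and people names) of one simplified way to compute it: the bus number is bounded by the least-covered piece of the system.

    # A toy illustration of the "bus number": map each component to the people
    # who could support it today, then find the smallest group whose absence
    # would leave some component unsupported.
    from typing import Dict, Set

    def bus_number(knowledge: Dict[str, Set[str]]) -> int:
        # The least-covered component sets the ceiling: if only one person
        # understands the deploy scripts, losing that one person hurts.
        return min(len(people) for people in knowledge.values())

    knowledge = {
        "billing-module": {"alice", "bob"},
        "deploy-scripts": {"carol"},                  # a bus number of 1
        "search-index":   {"alice", "dave", "erin"},
    }
    print(bus_number(knowledge))  # -> 1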

Think about it next time you are looking at any part of your system.  Ask yourself who you would ask about that particular module, image, or whatever.  Then ask yourself who you would go to if the first person was unavailable.  How confident are you that you would quickly / expediently get your answer?  How confident are you that you could just look the information up in a Wiki or other documentation?

If you look at the questions above and start thinking that ‘we would figure it out after a while’ or making other excuses, you at the very least have a problem with communication and collaboration.  You almost certainly have a process problem.  And you may well have a cultural problem.  Make no mistake, what it means is that your team/organization/project is playing in traffic and simply waiting for the inevitable to happen.

And when something does happen, say, for example, one of the project’s “hero coders” takes a new job, it will be miserable for all who remain as they try to figure out what the hero was doing.  Meanwhile the project’s progress languishes and the deadline becomes unachievable.  Morale goes down as frustration goes up.  Maybe someone else decides to leave out of a sense of futility; making the problem worse.  And it will have been completely avoidable.  It will be completely the fault of the leadership that was either not assertive enough to make the hero share their knowledge or undisciplined enough to not include sustainability in their coaching, plans, and day-to-day execution priorities.

This is serious stuff and is worth the investment of time to solve.  The habit of focusing on the overall sustainability of the organization is something that successful, resilient organizations emphasize.  It is well documented in the classic book “Good to Great” by Jim Collins, which describes the organizations being built as sophisticated machines, using the analogy of clock building.  The notion is simple, really: the goal is to build a lasting thing that continues on as people come and go.  The project / organization must be bigger than any individual, the individuals involved must understand that, and management must encourage or enforce that mindset.  In the book, the organizations that did this radically outperformed their peers in the same markets in the same timeframe.

The reality is that you will probably always have some stuff (ideally only non-critical or very new stuff) that is not well disseminated, but view those gaps for what they are and triage / prioritize them so that you do not accumulate the knowledge gap as technical debt.  Or, if you do, you should do so consciously, visibly, and at a level at which you know you can tolerate the risk.

Change Mis-management (Part 3)

For part three of the Change Mis-management series, I want to pick on the tradition of NOT keeping system management scripts in version control.  This is a fascinating illustration of the cultural difference between Development and Operations.  Operations is obsessed with ensuring stability and yet tolerates fairly loose control over things that can decimate the environment at the full speed of whatever machine happens to be running the script.  Development is obsessed with making incremental changes to deliver value and would never tolerate such loose control over its code.  I have long speculated that this level of discipline in Development is a product of having to deal with and track a LOT of change.

Whatever the cause and whether or not you believe in Agile and/or the DevOps movement, this is really a fundamental misbehavior and we all know it.  There really is no excuse for not doing it.  Most shops have scripts that control substantial swaths of the infrastructure.  There are various application systems that depend on the scripts to ensure that they can run in a predictable way.  For all intents and purposes these scripts represent production-grade code.

This is hopefully not a complex problem to explain or solve.  The really sad part is that every software delivery shop of any size already has every tool needed to version manage all of its operations scripts.  There is no reason that there can’t be an Ops Scripts tree in your source control system.  Further, those repositories are often set up with rules that force some sort of notation for the changes being put into those scripts and that track who checked them in, so you have better auditing right out of the gate.
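As an illustration of that kind of rule, here is a minimal sketch of a git commit-msg hook in Python; the ‘OPS-1234’ ticket convention is just an assumed example, so substitute whatever notation your shop uses.

    #!/usr/bin/env python3
    # Hypothetical .git/hooks/commit-msg hook: reject commits to the Ops
    # scripts repository unless the message references a change ticket.
    import re
    import sys

    # git passes the path of the commit message file as the first argument.
    message = open(sys.argv[1], encoding="utf-8").read()

    if not re.search(r"\b[A-Z]+-\d+\b", message):
        sys.stderr.write("Commit rejected: reference a change ticket (e.g. OPS-1234).\n")
        sys.exit(1)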

Further, you now have a way to know, or at least make a good guess at, what has been run on the systems.  That is particularly important if the person who ran the script is not available for some reason.  If your operations team can agree on the doctrine of always running the ‘blessed’ version and never hacking it on the filesystem, then life will get substantially better for everyone.  Of course, the script could be changed after checkout and the changes not logged.  Any process can be circumvented – most rather easily when you have root.  The point is to make such an event more of an anomaly.  Maybe even something noticeable – though I will talk about that in the next part of this series.
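A minimal sketch of what enforcing that doctrine could look like, assuming the scripts live in a git repository (the wrapper name and paths here are hypothetical):

    #!/usr/bin/env python3
    # run_blessed.py: refuse to execute an Ops script if the working copy
    # differs from the version checked into git.
    # Usage: run_blessed.py ops-scripts/rotate_logs.sh [args...]
    import subprocess
    import sys

    script = sys.argv[1]

    # "git diff --quiet" exits non-zero when the file has uncommitted edits.
    if subprocess.run(["git", "diff", "--quiet", "HEAD", "--", script]).returncode != 0:
        sys.exit(f"{script} does not match the version in source control; aborting.")

    # Run the blessed version and pass through its exit code.
    sys.exit(subprocess.run(["bash", script] + sys.argv[2:]).returncode)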

This is really just a common-sense thing that improves your overall organizational resilience.  Repeat after me:

  • I resolve to always check in my script changes.
  • I resolve to never run a script unless I have first checked it out from source to make sure I have the current version.
  • I resolve to never hack a script on the filesystem before I run it against a system someone other than me depends on.  (Testing is allowed before check-in; just like for developers)
  • I resolve to only run scripts of approved versions that I have pulled out of source control and left unmodified.

It is good, it is easy, it does not take significant time to do, and it saves you from countless time-consuming screw-ups.  Just do it.

Change Mis-management (Part 1)

One of the pillars of DevOps thinking is that the system is a whole.  No part can function without the others, so they all should be treated as equals.  Of course, things rarely work that way.  One of the glaring examples in a lot of shops is the disparity in the way changes are managed / tracked between Dev and Ops.  There are multiple misbehaviors we can examine in just this one area.  Some other day we can discuss how there are different tracking systems for different parts of the system and how many shops have wholly untracked configurations for some components.  Today, instead, we’re going to talk about the different levels of diligence that get applied in Ops versus Dev when dealing with change.

Think about this for a second.  No developer in an enterprise shop would ever think of NOT checking in all of their code changes.  And if they did, they would view it as a pretty serious bypass of good practice.  Code goes into the repository and is pulled from that repository for build, test, and deployment.  It is a bedrock practice of commercial software development.  Meanwhile, the Ops team has a bunch of scripts that they use to maintain the environment.  How many of those are religiously checked into a version control system (VCS)?  And of those that do end up in a VCS, how many have change tickets attached when they are modified?  And then there are the VM template images, router configs, etc. that may or may not be safely stored someplace.

All too often the change management that happens here is a script update or a command executed someplace on some piece of infrastructure.  The versioning takes the form of a file extension; you know – “.old”, “.bak”, “.orig”, “.[todaysdate]” – so that there is some… evidence… that a change was made to the system.  The tracking of the change is often a manually updated trouble ticket or change request.  And let’s not forget that the Ops ticketing system probably does not talk to the change management system the developers use.  Is it any wonder that things get screwed up when something comes down the pipe from Dev?

To really have things working properly, you have to:

  • Unify the change management between Ops and Dev
  • Track scripts the way you would any source code on which your applications depend.
  • Have a method to automatically capture changes made to the environment and log them.

All three of these things are necessary if you really want to achieve a higher-speed and more predictable release process.
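The third point is the least familiar to most shops, so here is a minimal sketch of the idea, assuming a simple snapshot-and-compare approach (the watched paths and state file location are placeholders, not recommendations):

    # change_capture.py: snapshot file hashes under watched directories and
    # log anything that changed since the last run.  Run it from a scheduler
    # and feed the output into your shared change log / ticketing system.
    import hashlib
    import json
    import pathlib
    import time

    WATCHED = [pathlib.Path("/etc"), pathlib.Path("/opt/ops-scripts")]
    STATE = pathlib.Path("/var/lib/change-capture/state.json")

    def snapshot():
        hashes = {}
        for root in WATCHED:
            for p in root.rglob("*"):
                if p.is_file():
                    try:
                        hashes[str(p)] = hashlib.sha256(p.read_bytes()).hexdigest()
                    except OSError:
                        continue  # skip anything we cannot read
        return hashes

    previous = json.loads(STATE.read_text()) if STATE.exists() else {}
    current = snapshot()

    for path in sorted(set(previous) | set(current)):
        if previous.get(path) != current.get(path):
            print(time.strftime("%Y-%m-%dT%H:%M:%S"), "CHANGED", path)

    STATE.parent.mkdir(parents=True, exist_ok=True)
    STATE.write_text(json.dumps(current))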

“Enterprise” DevOps

Anyone in the IT industry today will note that much of the DevOps discussion is focused on small companies with large websites – often tech companies providing SaaS solutions, consumer web services, or some other online offering.  There is another set of large websites, supported by large technology organizations, that have a need for DevOps.  These are large commerce sites for established retailers, banks, insurance companies, etc.  Many of these companies have had large-scale online presences and massive software delivery organizations behind them for well over a decade now.  Some of these enterprises would, in fact, qualify among the largest software companies on the planet, dwarfing much more ‘buzz-worthy’ startups.  It also turns out that they are pretty good at delivering to their online presence predictably and reliably – if not as agilely as they would like.

Addressing the agility challenge in an enterprise takes a different mindset than it does in a tech startup.  This has always been the case, of course.  Common sense dictates that solving a problem for 100 people is intrinsically different than for 10,000.  And yet so many discussions focus on something done for a ‘hot’ website or maybe a large ‘maverick’ team in a large organization.  And those maverick team solutions more often than not do not scale to the enterprise and have to be replaced.  Of course, that is rarely discussed or hyped.  They just sort of fade away.

It is not that these faded solutions are bad or wrong, either.  A lot of times, the issue is simply that they only looked at part of the problem and did not consider the impact of improving that part on the other parts of the organization.  In large synchronized systems, you can only successfully accelerate or decelerate if the whole context does so together.  There are many over-used analogies for this scenario, so let’s use the one about rowers on a boat to make this point and move on.

Let’s face it, large organizations can appear to be the “poster children” for silos in their organizational structure.  You have to remember, though, that those silos often exist because the organization learned hard lessons about what happens when no one is maniacally focused on one narrow, specialized activity.  Think about this as your organization grows, runs into problems or failures, and puts infrastructure in place to make sure it doesn’t happen again.

One of the main points of this blog is going to be to look at the issues confronted by organizations that are or are becoming ‘enterprises’ and how they can balance the need for the Agile flexibility of DevOps with the pragmatic need to synchronize large numbers of people.