After several months of being down in the weeds with various tools for a customer, I have had a chance to come back up for air. The past few months have been fun, but I have neglected this blog in favor of some very tool-centric posts on our team blog, built around the particular tool suite my customer is using. Those were enjoyable to write and the customer has been great, but there are bigger issues I prefer writing about here. And I have a running backlog of topics that I really want to turn into posts. I hope you find them interesting, too.
What Makes a Good DevOps Tool?
We had an interesting discussion the other day about what makes a “good” DevOps tool. The assertion was that a good citizen, or good “link,” in the toolchain has the same basic attributes regardless of the part of the system for which it is responsible. As it turns out, at least with current best practices, this is a reasonably true assertion. We came up with three basic attributes that a tool must have or it will tend to fall out of the toolchain relatively quickly. We got academic and threw ‘popular’ out as a criterion – though supportability and skills availability have to be factors at some point in the real world. Even so, most popular tools are at least reasonably good in all three categories.
Here is how we ended up breaking it down:
- The tool itself must be useful to the domain experts whose area it affects. Whether it be sysadmins worried about configuring OS images automatically, DBAs, network guys, testers, developers, or any of the other potential participants, if the tool does not work for them, they will not adopt it. In practice, specialists will put up with a certain amount of friction if it helps other parts of the team, but once that line is crossed, they will do what they need to do. Even among development teams, where automation is common for CI processes, I STILL see shops that use one source control system day-to-day and then promote from it into the source control system of record. The latter was only still in the toolchain due to a bureaucratic audit requirement.
- The artifacts the tool produces must be easily versioned. Most often this means some sort of text-based file that can be managed using standard source control practices, which lets versions be quickly referenced and changes tracked over time. Closed systems that have binary version tracking buried somewhere internally are flat-out harder to manage and too often add layers of difficulty to comparing versions and other common tasks. Not that the artifact would have to be text-based per se, but we had a really hard time coming up with tools that produced easily versioned artifacts and did not just use good old text.
- The tool itself must be easy to automate externally. Whether through a simple API or command line, the tool must be easily inserted into the toolchain, or even moved around within it, with a minimum of effort. This allows the quickest time to value, of course, but it also means that the overall flow can be more easily optimized or applied in new environments with a minimum of fuss. A short sketch of what these last two attributes look like from a pipeline's perspective follows this list.
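To make the last two attributes concrete, here is a minimal sketch in Python of what a good toolchain citizen looks like from the pipeline's side. The tool name (imagebuilder), its flags, and the file layout are hypothetical stand-ins rather than a real product; the point is only the shape – desired state kept in a plain-text file that diffs cleanly under source control, driven through a command line that any orchestrator can call without a human in the loop.

```python
#!/usr/bin/env python3
"""Sketch: a pipeline step driving a hypothetical 'imagebuilder' tool.

The tool, its flags, and the repository layout are illustrative
assumptions, not a real product. The pattern is what matters:
  - the desired state lives in a plain-text file under source control,
    so a simple diff shows exactly what changed between versions;
  - the tool is invoked through its command line, so any orchestrator
    (a shell script, cron, a CI server) can run it the same way.
"""
import subprocess
import sys

# Text artifact, versioned in git alongside everything else.
CONFIG = "environments/prod/web-image.yaml"

def build(config_path: str) -> int:
    # Hypothetical CLI: a toolchain-friendly tool exposes its work this way.
    result = subprocess.run(
        ["imagebuilder", "apply", "--config", config_path, "--non-interactive"],
        capture_output=True,
        text=True,
    )
    # Plain-text output means the pipeline log captures everything useful.
    sys.stdout.write(result.stdout)
    sys.stderr.write(result.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(build(CONFIG))
```

Anything with this shape can be inserted into, or moved around within, a toolchain with very little ceremony, which is exactly the point of the third attribute.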
We got pretty meta, but these three aspects showed up in a wide variety of tools that we knew and loved. The best build tools, the best sysadmin tools, even the database tooling had these aspects. If nothing else, this is proof positive that the idea of ‘infrastructure as code’ is still very valid. The above apply nicely to the most basic of modern IDEs producing source code. But the exercise became interesting when we looked at older versus newer tools – particularly the frameworks – and how they approached the problem. Interestingly, we felt that some older, but popular, tools did not necessarily pass the test. For example, Hudson/Jenkins are weak on #2 and #3 above. Given their position in the toolchain, it was not clear whether that mattered as much or whether there was a better alternative, but it was an interesting perspective on tools we all regarded as among the best in their space.
This is still an early thought, but I wanted to share it to see what discussion it stimulates. How we look at tools and toolchains is evolving and maturing. A tool that is well loved by a particular discipline but is a poor toolchain citizen may not be the right answer for the overall organization; a close second that is a better overall fit might be. But that goes against the common practice of letting practitioners use what they feel is best for their task. So what do you do? Who owns that organizational strategic call? We are all going to have to figure that one out as we progress.
What would “The Matrix” look like now?
All of the recent talk about matrix organizations has gotten me thinking a bit about “The Matrix” – the movie…
That movie came out in March of 1999 with an R rating. It was therefore targeting folks born in the early 1980s or earlier – a demographic that grew up with the popular image of computers, at least very large ones, as having “green screen” interfaces. Despite the proliferation of WIMP GUIs in the 90s, the classic terminal screen was still a common paradigm when discussing computers. It is also useful to remember that the web was only a few years old in popular culture when this movie was being designed – which probably started in the 1996-1997 timeframe, a time when a 56K dial-up connection counted as smokin’-fast access to relatively simplistic web sites.
So, the iconic visualization of “The Matrix” as a big cascade of green characters makes a great deal of sense.
Since then, we’ve had the explosion of high-speed connectivity to the home and the subsequent advent of rich media websites. By the way, mobile phones are a LOT cooler than the ones Neo & company were using, too. In fact, I daresay my iPhone is more powerful/capable than the computer Neo had on his desk…
But as I watched the movie again with my sons, I realized that they had never really seen a green screen terminal except in a movie. Those glowing green letters were basically as relevant/real to them as a typewriter. The question that came to me at that point was: what visualization would make sense to them? My unscientific survey turned up these descriptions:
- Google or Facebook
- A cloud. With lightning. And maybe some colors.
- iPad – there’d be an app for that…
- Video game – for the guys watching the screens, it would be like watching Call of Duty or Battlefield.
- Maybe an RTS, like Age of Empires
So, there you go. Of course, by the time the inevitable remake/sequel/reboot comes along, I’m sure we’ll have even cooler paradigms.
How a DOMO Fits
In my previous post, I discussed the notion of a DevOps Management Organization, or DOMO. As I said there, this is an idea that is showing up under different names at shops of varying sizes. I thought I would share a drawing of one to serve as an example. The basic structure is, of course, a matrix organization with the ability to have each key role present within the project. It also provides for shared infrastructure services such as support and data. You could reasonably easily replace the Business Analyst (BA) role with a Product Owner / Product Manager role, change “Project” to “Product”, and have a variant of this structure that I have seen implemented at a couple of SaaS providers around Austin.
This structure does assume a level of maturity in the organization as well as in the underlying infrastructure. It is useful to note that the platform is designated as a “DevOps Platform”. It would probably be better to phrase that as a cloud-type platform – public, private, or hybrid – where the permanence of a particular image is low, but the consistency and automation are high. To be sure, not all environments have built such an infrastructure, but many, if not most, are building them aggressively. The best time to look at the organization is while those infrastructures are being built and companies are looking for the best ways to exploit them.