Agility Comes from Knowledge

One of the members at Agile Austin is fond of saying that ‘the only true source of Agility is knowledge’.  I think that is very true in a lot of situations.  The more you know and understand, the more adaptable you can be.  It might be a geek-spun buzzword version of the old aphorism that “knowledge is power”, but that doesn’t mean it is in any way bad.  Indeed, old aphorisms, rephrased or not, stand the test of time because they speak to human nature.  For all of our technology, we’re still pretty much the same.

So, where does this cultural comment hit DevOps?  Many places, really, but today I am going to pick on the fact that Agile and DevOps require participants to know and understand more than their traditional job role would have demanded.  It is no longer OK to just be the best coder or sysadmin or architect.  You have to maintain a much higher level of generalization to be effective in your specialized job role.

This can hurt people’s heads a bit.  Particularly in larger organizations where the message has been to specialize and be the best [technical role] that you can be.  The irony is that larger organizations tend to have large and complex application systems.  So, they have traditionally compensated by having teams of people who specialize in fitting things together across the specialties.  While this certainly works, a lot of time is often spent on rework and polish to get the pieces to all fit together.  That time directly impacts the responsiveness (agility) of the development organization to the business.

Now those organizations are faced with needing to retool their very culture (and the management structures entwined within it) to place some amount of value on generalization for their people.  That means deliberately encouraging staff to learn more and more about the “big picture” from all aspects – not just technical.  That means deliberately DIS-couraging isolationism in specific disciplines.  It also means deliberately blowing up organizational fiefdoms before they take hold.  And it means rewarding behaviors that focus on achieving the larger goals of the organization while rooting out incentives for parochial behaviors.

The funny thing is that this is not new.  When I was first a manager, I worked for a company obsessed with this sort of thing.  We were very high on the notion of ‘lifetime learning’ and organizational development in general.  It was a way that the company encouraged/taught/focused people to aggressively adapt to the changes that came with fast growth.  That company returned more to its investors than any other tech startup I have seen in a long time.  We never worried about solving problems – we all understood a lot about the business and had a common understanding of how it worked.  It was easy and fast to get people working on a problem because we did not have to waste time bringing people ‘up to speed’.  We knew how it fit together and understood the value of proactively pushing it into newbies’ heads.  They wouldn’t be newbies for long, after all.

This month’s book club selection at Agile Austin is focusing on Peter Senge’s keystone work in this area – “The Fifth Discipline”.  I have not read it in a while, but it is damn good to hear people focusing on this stuff again.  I really liked a lot of the concepts in that book; probably because they are relatively timeless as they relate to human nature / behavior.

Change Mis-management (Part 3)

For part three of the Change Mis-management series, I want to pick on the tradition of NOT keeping system management scripts in version control.  This is a fascinating illustration of the cultural difference between Development and Operations.  Operations is obsessed with ensuring stability and yet tolerates fairly loose control over things that can decimate the environment at the full speed of whatever machine happens to be running the script.  Development is obsessed with making incremental changes to deliver value and would never tolerate such loose control over their code.  I have long speculated that this level of discipline on the Development side is simply a product of having to deal with and track a LOT of change.

Whatever the cause, and whether or not you believe in Agile and/or the DevOps movement, this is really a fundamental misbehavior and we all know it.  There really is no excuse for not keeping these scripts under version control.  Most shops have scripts that control substantial swaths of the infrastructure.  There are various application systems that depend on the scripts to ensure that they can run in a predictable way.  For all intents and purposes these scripts represent production-grade code.

This is hopefully not a complex problem to explain or solve.  The really sad part is that every software delivery shop of any size already has every tool needed to version manage all of their operations scripts.  There is no reason that there can’t be an Ops Scripts tree in your source control system.  Further, those repositories are often set up with rules that force some sort of notation for the changes that are being put into those scripts and will track who checked them in, so you have better auditing right out of the gate.

Further, you now have a way to know – or at least have a good idea of – what has been run on the systems.  That is particularly important if the person who ran the script is not available for some reason.  If your operations team can agree on the doctrine of always running the ‘blessed’ version and never hacking it on the filesystem, then life will get substantially better for everyone.  Of course, the script could be changed after checkout and the changes not logged.  Any process can be circumvented – most rather easily when you have root.  The point is to make such an event more of an anomaly.  Maybe even something noticeable – though I will talk about that in the next part of this series.
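
To make the ‘blessed version only’ doctrine a little harder to skip, a thin wrapper can do the pull-and-verify step for you.  What follows is only a minimal sketch, assuming the ops scripts live in a git working copy on the machine that runs them; the name run_blessed, the choice of git, and handing scripts to bash are illustrative assumptions rather than a prescription.

    #!/usr/bin/env python3
    """Hypothetical 'run only the blessed version' wrapper (a sketch).
    Assumes the ops scripts live in a git working copy on this machine."""
    import subprocess
    import sys

    def is_unmodified(script_path: str) -> bool:
        """True if the script has no uncommitted local changes."""
        # 'git status --porcelain <path>' prints nothing when the file is clean.
        result = subprocess.run(
            ["git", "status", "--porcelain", "--", script_path],
            capture_output=True, text=True, check=True)
        return result.stdout.strip() == ""

    def run_blessed(script_path: str, *args: str) -> int:
        """Refuse to run a script that differs from what is checked in."""
        subprocess.run(["git", "pull", "--ff-only"], check=True)  # fetch the blessed versions
        if not is_unmodified(script_path):
            print(f"REFUSING to run {script_path}: local modifications detected. "
                  "Check the change in (or revert it) first.", file=sys.stderr)
            return 1
        return subprocess.run(["bash", script_path, *args]).returncode

    if __name__ == "__main__":
        sys.exit(run_blessed(sys.argv[1], *sys.argv[2:]))

The particular commands are not the point; the point is that running an unmodified, checked-in version becomes the path of least resistance, and anything else stands out as an anomaly.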

This is really just a common-sense thing that improves your overall organizational resilience.  Repeat after me:

  • I resolve to always check in my script changes.
  • I resolve to never run a script unless I have first checked it out from source to make sure I have the current version.
  • I resolve to never hack a script on the filesystem before I run it against a system someone other than me depends on.  (Testing is allowed before check-in; just like for developers)
  • I resolve to only run scripts of approved versions that I have pulled out of source control and left unmodified.

It is good, it is easy, it does not take significant time to do, and it prevents countless time-consuming screw-ups.  Just do it.

Change Mis-management (Part 2)

In my last post, I mentioned three things that need to be reliably happening in order to achieve a faster, more predictable release process.  The first one was to unify change management for the system between the Ops and Development sides.  On the surface, this sounds like a straightforward thing.  After all, a tool rationalization exercise is almost routine in most shops.  It happens regularly due to budget reviews, company mergers or acquisitions, etc.

Of course, as we all know, it is never quite that easy even when the unification is happening for nominally identical teams.  For example, your company buys another and you have to merge the accounting system.  Pretty straightforward – money is money, accountants are accountants, right?  Those always go perfectly smoothly, right?  Right?

In fact, unifying this aspect of change management can be especially thorny because of the intrinsic differences in tradition between the organizations.  Even though complex modern systems evolve together as a whole, few sysadmins would see themselves as ‘developing’ the infrastructure, for example.  There are other problems as well.  For instance, operations teams are frequently seen as service providers who need to provide SLAs for change requests that are submitted through the change management system.  And a lot of operational tracking in the ticketing system is just that – operational – and it does not relate to actual configuration changes or updates to the system itself.

The key to dealing with this is the word “change”.  Simplified, system changes should be treated in the same way as code changes are handled – whatever that way might be.  For example, it could be a user story in the backlog.  The “user” might be a middleware patch that a new feature depends on, and the work items might be around submitting tickets to progressively roll that up the environment chain into production.  The goal is to track needed changes to the system as first-class items in the development effort.  The non-change operational stuff will almost certainly stay in the ticketing system.  A simple example, but applying the principle will mean that the operating environment of a system evolves right along with its code – not as a retrofit or afterthought when it is time to deploy to a particular environment or there is a problem.
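
To make that a bit more concrete, here is a toy sketch of how such a change might be modeled as a first-class backlog item with one trackable task per hop in the promotion chain.  Every name in it is made up for illustration; in practice this would simply be a story with child tasks in whatever tracker the team already uses.

    from dataclasses import dataclass, field
    from typing import List

    # Hypothetical shape of an environment change tracked as a backlog item.
    # Field names and the environment chain are illustrative assumptions.
    @dataclass
    class EnvironmentChangeStory:
        title: str       # e.g. "Apply middleware patch"
        needed_by: str   # the code feature that depends on it
        environments: List[str] = field(
            default_factory=lambda: ["dev", "qa", "staging", "production"])

        def promotion_tasks(self) -> List[str]:
            """One trackable work item per environment in the promotion chain."""
            return [f"Submit/verify ticket to apply '{self.title}' in {env}"
                    for env in self.environments]

    if __name__ == "__main__":
        story = EnvironmentChangeStory(
            title="Apply middleware patch",
            needed_by="New checkout feature")
        for task in story.promotion_tasks():
            print(task)

The value is not in the code – it is in having the environment change show up on the same board, in the same sprint, as the feature that needs it.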

The tool part is conceptually easy – someone manages the changes in the same system where the backlog/stories/work items are handled.  However, there is also the matter of the “someone” mentioned in that sentence.  An emerging pattern I have seen in several shops is to co-locate an Ops-type with the development team.  Whether these people are called the ‘ops representative’ or ‘infrastructure developers’, their role is to focus on evolving the environment along with the code and ensuring that the path to production is in sync with how things are being developed.  This is usually a relatively senior person who can advise on things, know when to say no, and know when to push.  The real shift is that changes to the operating environment of an application become first-class citizens at the table when code or test changes are being discussed, and they can now be tracked as part of the work that is required to deliver on an iteration.

These roles have started popping up in various places with interesting frequency.  To me, this is the next logical step in Agile evolution.  Having QA folks in the standups is accepted practice nowadays, and organizations are figuring out that the Ops guys should be at the table as well.  This does a great job of proactively addressing a lot of the release / promotion headaches that slow things down as they move toward production.  Done right, this takes a lot of stress and time out of the overall Agile release cycle.

New Toy!!! IBM Workload Deployer

The company I work for has many large corporations in its customer base, many of which are IBM shops with commensurately large WebSphere installed bases.  So, as you might imagine, it behooves us to keep abreast of the latest stuff IBM delivers.

We are fortunate enough to be pretty good at what we do and are in the premier tier of IBM’s partner hierarchy, so we were recently able to get an IBM Workload Deployer (IWD) appliance in as an evaluation unit.  If you are not familiar, the IWD is really the third revision of the appliance formerly known as the IBM WebSphere CloudBurst appliance.  I do not know for sure, but I would presume the rebrand is related to the fact that the IWD handles more generic workloads than simply those related to WebSphere and therefore deserved a more general name.

You can read the full marketing rundown on the IBM website here:  IBM Workload Deployer

This is a “cloud management in a box” solution that you drop onto your network and point at one or more of the supported hypervisors, and it handles images, load allocation, provisioning, etc.  You can give it simple images to manage, but the thing really lights up when you give it “Patterns” – a term which translates to a full application infrastructure (balancing webservers, middleware, DB, etc.).  If you use this setup, the IWD will let you manage your application as a single entity and maintain the connections for you.

I am not an expert on the thing – at least not yet – but a few other points that immediately jump out at me are:

  • The thing also has a pretty rich Python-based command line client that should allow us to do some smart scripting and maintain those scripts in a proper source repository.
  • The patterns and resources have some built-in intelligence so that you can’t break the dependencies of a published configuration.
  • There are a number of pre-cooked template images, which don’t seem very locked down, that you can use as starting points for customization – or you can roll your own.
  • The Rational Automation Framework tool is here, too, which brings up some migration possibilities for folks looking to bring apps either into a ‘cloud’ or a better-managed virtual situation.

I do get to be one of the first folks to play with the thing, so I’ll be drilling into as many of these and other things as time permits.  More on it as it becomes available.

Change Mis-management (Part 1)

One of the pillars of DevOps thinking is that the system is a whole.  No part can function without the others, so they all should be treated as equals.  Of course, things rarely work that way.  One of the glaring examples in a lot of shops is the disparity in the way changes are managed / tracked between Dev and Ops.  There are multiple misbehaviors we can examine in just this one area.  Some other day we can discuss how there are different tracking systems for different parts of the system and how many shops have wholly untracked configurations for some components.  Today, instead, we’re going to talk about the different levels of diligence that get applied in Ops versus Dev when dealing with change.

Think about this for a second.  No developer in an enterprise shop would ever think of NOT checking in all of their code changes.  And if they did, they would view it as a pretty serious bypass of good practice.  Code goes into the repository and is pulled from that repository for build, test, and deployment.  It is a bedrock, constant practice of commercial software development.  Meanwhile, the Ops team has a bunch of scripts that they use to maintain the environment.  How many of those are religiously checked into a version control system (VCS)?  And of those that do end up in a VCS, how many have change tickets attached when they are modified?  And then there are the VM template images, router configs, etc. that may or may not be safely stored someplace.

All too often the change management that happens here is a script update or a command executed someplace on some piece of infrastructure.  The versioning takes the form of a file extension; you know – ".old", ".bak", ".orig", ".[todaysdate]" – so that there is some… evidence… that a change was made to the system.  The tracking of the change is often a manually updated trouble ticket or change request.  And let’s not forget that the Ops ticketing system probably does not talk to the change management system the developers use.  Is it any wonder that things get screwed up when something comes down the pipe from Dev?

To really have things working properly, you have to:

  • Unify the change management between Ops and Dev
  • Track scripts the way you would any other source code your applications depend on.
  • Have a method to automatically capture changes made to the environment and log them (a rough sketch of this idea follows at the end of this post).

All three of these things are necessary if you really want to achieve a higher-speed and more predictable release process.
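
The third bullet is the one most shops have the least of, so here is a minimal sketch of what ‘automatically capture changes made to the environment and log them’ can mean in practice: checksum a handful of watched files on a schedule (cron or similar) and log when they drift.  The paths, the state-file location, and the logging format are assumptions for illustration, not an endorsement of any particular tool.

    #!/usr/bin/env python3
    """Sketch: notice and log drift in watched environment files."""
    import hashlib
    import json
    import logging
    from pathlib import Path

    WATCHED = ["/etc/httpd/conf/httpd.conf", "/etc/my.cnf"]  # example paths only
    STATE_FILE = Path("/var/lib/envwatch/state.json")        # example location

    logging.basicConfig(level=logging.INFO,
                        format="%(asctime)s %(levelname)s %(message)s")

    def checksum(path: str) -> str:
        """SHA-256 of the file contents, or 'MISSING' if it is gone."""
        p = Path(path)
        return hashlib.sha256(p.read_bytes()).hexdigest() if p.exists() else "MISSING"

    def main() -> None:
        previous = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
        current = {path: checksum(path) for path in WATCHED}
        for path, digest in current.items():
            if path in previous and previous[path] != digest:
                logging.warning("Change detected in %s (was %s, now %s)",
                                path, previous[path][:12], digest[:12])
        STATE_FILE.parent.mkdir(parents=True, exist_ok=True)
        STATE_FILE.write_text(json.dumps(current, indent=2))

    if __name__ == "__main__":
        main()

In real life you would point something like this – or a proper configuration management / drift detection tool – at a much larger set of files, but even this little bit turns silent changes into noticeable ones.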

A Sports Analogy for DevOps Thinking

I have been known to go off a bit on how typical management culture self-defeats in its attempts to execute more quickly.  This is a pretty common problem, as much cultural as managerial.  Here is some perspective.  Football (American Football, that is) has an ineligible receiver rule.  The roles of the individual players, on offense especially, are so specialized that only certain people can receive a pass.  Seriously.  Then there are very specialized ‘position coaches’ who make sure that individual players focus on the subset of skills they need to perform their specific job.  There is also very little cross-training.  This works fine in the very iterative, assembly-line way the game is played.  Baseball is the same way – very specialized.  And both are quintessentially 20th century American games that grew during (and reflect) an industrial mindset.

However, business is a free-flowing process.  There are no ‘illegal formations’.  Some work.  Some don’t.  The action does not stop.  A better game analogy for releasing software (or running IT, or even the whole business) in the modern era would be Soccer (Football in the rest of the world).  The game constantly flows.  There are no codified rules about who passes the ball to whom.  The goalie is actually only special in that he can use his hands – when standing in his little area.  There is no rule that says it is illegal for him to come out of that area and participate as a regular player.  This occasionally happens in the course of elimination tournaments, in fact.

I draw this comparison to point out the relative agility of a soccer team in adapting to an ever-changing game flow.  Football teams only function when there is a very regulated flow of events and when there are a number of unrealistic throttles and limits on the number and types of events.  When you compare this to how most IT shops are set up, you find a lot of football teams and very few soccer teams.  And guess which environments are seen as more adaptable to the needs of their overall organizations…

Author’s note:  I picked soccer over hockey and basketball principally because the latter two sports rely heavily on rapid substitution and aggressive use of timeouts.  Those are luxuries that modern online business most certainly does not have.  Substitutions happen slowly in the enterprise and there darn sure are no timeouts.

“Enterprise” DevOps

Anyone in the IT industry today will note that much of the DevOps discussion is focused on small companies with large websites – often tech companies providing SaaS solutions, consumer web services, or some other online offering.  There is another set of large websites, supported by large technology organizations, that have a need for DevOps.  These are the large commerce sites of established retailers, banks, insurance companies, etc.  Many of these companies have had large-scale online presences and massive software delivery organizations behind them for well over a decade now.  Some of these enterprises would, in fact, rank among the largest software companies on the planet, dwarfing much more ‘buzz-worthy’ startups.  It also turns out that they are pretty good at delivering to their online presence predictably and reliably – if not as agilely as they would like.

Addressing the agility challenge in an enterprise takes a different mindset than it does in a tech startup.  This has always been the case, of course.  Common sense dictates that solving a problem for 100 people is intrinsically different from solving it for 10,000.  And yet so many discussions focus on something done for a ‘hot’ website or maybe a large ‘maverick’ team in a large organization.  And those maverick-team solutions more often than not do not scale to the enterprise and have to be replaced.  Of course, that is rarely discussed or hyped.  They just sort of fade away.

It is not that these faded solutions are bad or wrong, either.  A lot of times, the issue is simply that they only looked at part of the problem and did not consider the impact of improving that part on the other parts of the organization.  In large synchronized systems, you can only successfully accelerate or decelerate if the whole context does so together.  There are many over-used analogies for this scenario, so let’s use the one about rowers on a boat to make this point and move on.

Let’s face it, large organizations can appear to be the “poster children” for silos in their organizational structure.  You have to remember, though, that those silos often exist because the organization learned hard lessons about the cost of NOT having someone maniacally focused on one narrow, specialized activity.  Think about this as your organization grows, runs into problems or failures, and puts infrastructure in place to make sure it doesn’t happen again.

One of the main points of this blog is going to be to look at the issues confronted by organizations that are or are becoming ‘enterprises’ and how they can balance the need for the Agile flexibility of DevOps with the pragmatic need to synchronize large numbers of people.