Addressing the Agile Deployment Impedance Mismatch

[Image: chocolate assembly line]

Organizations have now spent years moving to agile development. In many ways, that has been a positive shift. Anything that focuses on delivering the highest-priority items first is a good thing, and having something ready to deploy at the end of each sprint is a huge improvement over waiting six months or more to see anything working. But does that mean those new features actually get delivered to real users?

Often, the answer is no.  While your development teams may crank out a releasable product every two or three weeks, your operations — and change management — teams aren’t interested in deploying that often (think of Lucy and Ethel at the chocolate factory).  Because deployment presents a risk.  Because releases are not fully tested and certified.  Because deployments are hard.

Let’s address these one by one.

Risk: Deployments in a traditional environment are risky for two reasons: they are rare, and they are highly manual processes. The former is a product of the latter. Automating a deployment process achieves two things:

  1. It completely and accurately documents the process, far better than any installation document ever could
  2. It ensures the deployment will always be executed the same way

These two things combine to relieve anxiety over errors during deployment. There are a lot of automation tools that can help here (more on this below), and the principle at the center is infrastructure-as-code. Creating infrastructure (servers, disk, network, middleware, databases, etc.) through code execution, rather than human processes, means that quality (good or bad) will be consistent. Once your quality becomes consistent (and good), deployments no longer need to be rare.
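To make that concrete, here is a minimal infrastructure-as-code sketch in Python using boto3, the AWS SDK (a tool not mentioned above; the AMI ID, instance type, and tags are all invented for illustration). The point is that every run of this code produces the same server, whether a developer or a pipeline invokes it.

```python
# Minimal infrastructure-as-code sketch using boto3 (AWS SDK for Python).
# The AMI ID, instance type, and tag values are illustrative assumptions;
# what matters is that running this always yields the same result.
import boto3

def provision_app_server(env: str):
    """Create an application server for the given environment (e.g. 'dev', 'prod')."""
    ec2 = boto3.resource("ec2", region_name="us-east-1")
    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical hardened base image
        InstanceType="t3.medium",
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [
                {"Key": "Name", "Value": f"app-server-{env}"},
                {"Key": "Environment", "Value": env},
            ],
        }],
    )
    return instances[0]

if __name__ == "__main__":
    server = provision_app_server("dev")
    print(f"Provisioned {server.id}")
```

The script is also the documentation: anyone can read exactly how a server is built, which no installation manual kept in a wiki can guarantee.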

Testing and certification: Teams that rely on human testing should never automatically deploy code; it's a recipe for failure. The only way to get to continuous deployment is with an automated test suite that ensures the implementation matches the requirements. Adding this at the end of a project, or even in the middle, is nearly impossible; it has to start before coding begins. Test-driven development and behavior-driven development are key to achieving reasonable certainty that an application meets requirements. Adopting a microservices architecture also helps, because testing each component becomes simpler. Think separation of concerns.
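As a small illustration of test-first thinking, here is what an automated behavior check might look like in Python with pytest (the shipping-cost rule is invented purely for this example). The tests encode the requirement before the implementation exists, and a continuous-deployment pipeline refuses to promote any build that fails them.

```python
# Test-first sketch with pytest: the tests state the requirement
# (flat $5 shipping, waived at $50 or more -- a rule invented for
# illustration), and the function is written to satisfy them.

def shipping_cost(order_total: float) -> float:
    """Flat $5 shipping, waived for orders of $50 or more."""
    return 0.0 if order_total >= 50.0 else 5.0

def test_orders_under_fifty_pay_flat_rate():
    assert shipping_cost(49.99) == 5.0

def test_orders_of_fifty_or_more_ship_free():
    assert shipping_cost(50.0) == 0.0
```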

Difficulty: This is where cloud automation tools come in.  It’s important to push these tools all the way back to the developer from day one so that all environments from DEV to PROD are always provisioned automatically.  And the application, and any associated artifacts should be deployed via the same path.  There are a variety of tools in this arena, including Chef, Puppet, Ansible, vagrant, bosh, HP Operations Orchestration and more.  A new one is called OneOps from Walmart Labs and it shows a lot of promise in delivering a full lifecycle management tool with a friendly interface via the open source community.  I hope to add another post with a deeper dive here later.  But for now you can watch Andy Cohan (Avalon’s Midwest Regional Technology Director) demo an end-to-end use of OneOps.
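To show what "the same path from DEV to PROD" can look like, here is a hypothetical deployment driver in Python (the hostnames, paths, and restart script are all invented). The essential property is that only configuration data varies by environment; the steps themselves never do.

```python
# Hypothetical deployment driver: every environment runs the exact same
# steps; only the configuration values differ. Hostnames, paths, and the
# restart script are invented for illustration.
import subprocess

ENVIRONMENTS = {
    "dev":  {"host": "dev.example.com",  "replicas": 1},
    "qa":   {"host": "qa.example.com",   "replicas": 2},
    "prod": {"host": "prod.example.com", "replicas": 4},
}

def deploy(env: str, artifact: str):
    config = ENVIRONMENTS[env]  # only the data changes per environment
    steps = [
        ["scp", artifact, f"deployer@{config['host']}:/opt/app/"],
        ["ssh", f"deployer@{config['host']}",
         f"/opt/app/restart.sh --replicas {config['replicas']}"],
    ]
    for step in steps:          # identical step list for every environment
        subprocess.run(step, check=True)

if __name__ == "__main__":
    deploy("dev", "app-1.2.3.tar.gz")
```

Because DEV deployments exercise the same code path as PROD deployments, the production release has effectively been rehearsed dozens of times before it ever matters.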

In evaluating automation tools, a few things are important: an easy-to-understand lifecycle, a simple interface, the ability to run locally, and reusable components. Without these, getting buy-in from development and operations can be difficult.


About Sean Dowd

Sean Dowd is VP and Chief Architect at Avalon Consulting, LLC.
