Why incremental delivery is a business concern first, technical a distant second

One of the most poorly understood concepts in product/software development is incremental delivery. “Waterfall” organisations certainly don’t get it, but neither do most who claim to be using/doing/being “Agile”, “Kanban”, “Lean” or whatever the flavour and buzzword of the month is.

If you ask Agile proponents, you will usually get answers along the lines of “delivering value incrementally” or “de-risking technical delivery” through smaller batches.
Those who are further along the scale of “getting it” may mention that incremental delivery is about building the right thing and discovering along the road what exact set of features actually solves the underlying problem, rather than speculating early about what will. This school of thought is far closer to the real benefits of incremental delivery, but it is still not quite hitting the nail on the head.

Incremental delivery, the insufficient bridge building analogy

An analogy for incremental delivery of software that I have sometimes seen is that of bridge building. The story goes something like this:

  • If you are building a river crossing as an Agile project, you first take people over a few at a time on a small raft.
  • Eventually you swap the raft for a boat that can carry a few more people.
  • Then maybe you build a simple, but rickety bridge to get even more people over.
  • Eventually you build a massive suspension bridge that will support all the traffic it needs.

All the while you have been working towards the big bridge that supports everyone crossing, you have had other means of getting people across, steadily scaling up the number of people you can get across in one go.
However, this analogy is a bit unsatisfactory: if you know the objective is to get people across the river, and you know how many people need to get over, why don’t you just bite the bullet and build a sufficient bridge immediately?

Bridge to Nowhere

It’s about measuring demand & ensuring the problem is worth solving to begin with!

However, the points that almost no one gets, and the main reasons to deliver incrementally, are actually:

  1. Measure whether there is actually demand for the problem to be solved in the first place!
  2. Measure whether there is actually demand for the problem to be solved the way your solution solves it!

Those are the only two reasons, nothing else! If you do not get this, to go back to our bridge building analogy, you may end up building Bridges to Nowhere, finding out only after great effort and expense that there was in fact no one who wanted to cross that river in the first place.

Incremental delivery is crucial to prove the business case and value hypothesis for why you are building something in the first place!

But maybe you know exactly what the problem is and how it should be solved?
Well, chances are, even if you are scratching your own itch, that you don’t know. 8 out of 10 new businesses fail.
Whether you are building consumer software for a start-up or doing internal systems integration to be used in the cavernous depths of an enterprise megacorp, software development is new product development. This means the context in which it is built is its “market”, and the product development itself is a “start-up”, even if no one outside will ever know it’s there.

If you are delivering software, you are delivering a new product of some description. Whether you want it or not, market forces are at work:

  • Is there sufficient demand for the proposed problem to be solved?
  • Is there sufficient demand for the problem to be solved the way you are solving it?
  • Is the demand sufficient to cover and exceed the development cost?

Have you ever come across or heard of an internal initiative or application that was eventually abandoned because no one used it? Market forces. The application or system not being used was evidence that there was no internal demand within the organisation for what it did.

The best way to address these questions is to run your software delivery as a series of experiments, soliciting market feedback to prove or disprove your value hypothesis & business case.

Market risk is the primary concern, decreased delivery risk & incremental value are secondary effects

So let’s sum things up: decreasing delivery risk through smaller batches of delivery is still a great benefit of incremental delivery. But it is a secondary concern compared to addressing whether what is being built is worth building at all.

Delivering value incrementally is a potential benefit of incremental delivery, IF it turns out that you are building something worthwhile. But you are actually only delivering value once you have achieved some level of Product-market fit and you are starting to transition into growth.

Until the point that you have actually proven the worthiness of the problem to be solved, and the solution to solve the problem, you are just dealing with the sunk cost of a series of experiments to try to prove a value hypothesis.

Those who have read this far may already have realised that most software deliveries, even those claiming to do incremental delivery, are effectively stabbing in the dark. Like a drunken gambler at a casino, they put all their money on one throw of the dice, one attempt at proving value, and then more often than not end up wondering what went wrong and where all the money went.

How Ansible & Docker fit: Using Ansible to bootstrap & coordinate Docker containers

There are a lot of exciting tools in the infrastructure & virtualisation space that have emerged in the last couple of years. Ansible & Docker are probably two of the most exciting ones in my opinion. While I’ve already used Ansible extensively, I’ve only started to use Docker, so that’s the big caveat emptor with regards to the contents of this post.

What’s Docker & why should I care?

Docker describes itself as a “container platform”, which at first glance can easily be confused with a VM. Wikipedia describes Docker containers in the following way:

Docker uses resource isolation features of the Linux kernel such as cgroups and kernel namespaces to allow independent "containers" to run within a single Linux instance, avoiding the overhead of starting virtual machines.

My take on where Docker is useful and not useful:

1. As a “process”/application container

Docker is useful as a container for distinct processes or groups of processes that make up an application. A typical example of this would be a container that runs an nginx server with HTTPS configured, serving up static content and JavaScript, as well as a webapp behind the nginx server that serves up dynamic pages or RESTful services.
This may be especially useful where you are running applications or processes that should be isolated for security purposes, but can share powerful hardware without the overhead of a full VM, and where the potential CPU or memory contention of a container is not an issue (on underutilised hardware this is potentially even desirable, so you don’t have a bunch of tin standing around mostly idling).

The real power in Docker’s “container” and “image” concept, though, is that it makes it trivial to share an application container with all its moving parts pre-configured as code (see Dockerfiles). Dealing with infrastructure as code and sharing it effectively between teams could be really disruptive when it comes to breaking down barriers to cross-team collaboration in larger organisations.

Docker as an app container

2. As a disposable sandboxed environment to run user-initiated, potentially dangerous processes or data

This is actually where Docker comes in for my day-to-day use on our Code Qualified service for automated programmer testing, which we can use as an example: when users submit their solutions to programming problems, the code has to be run, analysed and checked for correctness. However, a malicious user could potentially submit malicious code to Code Qualified’s servers to try to execute just about any nasty, destructive operation. This is where Docker comes in: we run submitted code on temporary Docker containers, which means that at worst they will destroy a temporary container that is only intended to live for a few minutes, and score a big zero on the test they have “solved”.

3. Do NOT try to use Docker as a VM replacement or to run “entire systems”

Finally, we get to what not to do with Docker: don’t try to run it as a full OS/system of sorts. It is not what it is meant for. As Phusion have pointed out, the base Docker images available lack a number of important system settings & services, so trying to run Docker containers as a substitute for a “real” system/VM can be fraught with potential issues.

I can run commands in my Dockerfile, why use Ansible?

You can run arbitrary commands in a Dockerfile that builds a Docker image, among them apt-get if you are building an image based on Ubuntu. So why is Ansible relevant?

1. Bootstrapping Docker containers

I knocked together a very simple example on GitHub that uses Ansible to set up a Vagrant VM that then runs Docker containers where the images are built in part by… Ansible (the Vagrant part isn’t relevant if you’re on Linux; I used it to set up a VM to run Docker on, as OS X doesn’t support Docker natively).
While it’s entirely possible to script everything in the Dockerfile with the RUN directive, I find Ansible scripts useful for a couple of reasons:

  • Ansible scripts are portable. I can test them on a Vagrant VM, on an AWS EC2 instance, on a “real” Linux machine or on Docker.
  • Ansible scripts are idempotent: if you run them again later against a container/VM/machine, they can act as a test that the box is in fact properly set up (and fix anything that isn’t).
  • Ansible scripts can provision multiple hosts concurrently: this is what your Ansible inventory-file is for.

To run Ansible while building a Docker image, you can create the following stub inventory file:

[local]
localhost

Then make your Dockerfile look something like this (where you have an Ansible script called “provision.yml” in the same directory, in this case likely installing nginx and setting it up correctly):

FROM ubuntu:14.04.1
MAINTAINER Wille Faler "wfaler@recursivity.com"
# Install Ansible from the Ansible PPA
RUN apt-get update
RUN apt-get install -y software-properties-common
RUN apt-add-repository ppa:ansible/ansible
RUN apt-get update
RUN apt-get install -y ansible
# Add the stub inventory and the playbook, then run the playbook against the image itself
ADD inventory-file /etc/ansible/hosts
ADD provision.yml provision.yml
RUN ansible-playbook provision.yml -c local
# Keep nginx in the foreground so the container doesn't exit immediately
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx"]

2. Setting up and coordinating Docker hosts

Ansible has a Docker module, which you can use to build, run, start, stop, link and coordinate your Docker containers & images in various ways. This is where Ansible really shines, and it will almost always be preferable to handcrafted, fragile shell scripts.
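
For illustration, here is a rough sketch of what a playbook using the Docker module might look like. The “docker-hosts” group and the “my-nginx” image name are made up for the example, and the parameters are from memory of the docker module, so double-check them against the documentation for your Ansible version:

---
# Hypothetical playbook: start a container from a previously built image
# on every host in the (made-up) docker-hosts inventory group.
- hosts: docker-hosts
  tasks:
    - name: Run the nginx container
      docker: image=my-nginx name=web ports=80:80

You would then run this with ansible-playbook against an inventory that defines the docker-hosts group, just like any other playbook.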

In fairness, I haven’t really used the Docker module that much in anger yet, though I suspect I will use it extensively eventually. These are the sort of tasks where Ansible is really brilliant even without Docker in the picture, so I wouldn’t expect that to change one bit with Docker. In fact I suspect Ansible will become even more integral to running infrastructure when you have multiple hosts running multiple Docker containers.

What else is there?

I have only really scratched the surface here, but this is a brief summary of my understanding so far of how tools like Ansible and Docker fit into the infrastructure ecosystem. I expect my understanding and views to evolve over time as I get deeper into it.

I haven’t even started looking at things like CoreOS, which together with etcd & fleet could prove to be interesting and potentially valuable building blocks, but I’ll leave that for another day, when I have had the time to explore it more deeply.

Relaunch of the Recursivity website

This site & blog hasn’t received a lot of love recently. As a consequence, I’m rebooting it altogether. The design is much the same (though it might change over time), but how it is made has changed entirely. Previously, I had my own hand-crafted site-generator based on Scala generating the pages. It did its job well enough, but it was pretty limited and not something I felt like maintaining over time when better alternatives exist out there. So a few weeks ago, I started playing around with Hakyll, and I found myself able to replicate most of what my own site-generator did in a matter of hours, so now the entire site is generated with Hakyll.

I’ve put the entire site, including the associated Haskell code, in a GitHub repo. The content itself will obviously be copyrighted, but feel free to take the code and do with it as you please!

Also, a big thanks to Abizern, whose Hakyll code I followed and partially ripped off.


