
Technical debt as "venture investment" - software should be disposable


I have already written at length about the great uncertainty in knowing whether writing software is worthwhile in the first place.
To recap my previous point: I claim it is impossible to know, even with relatively rich information, whether software is solving a worthwhile problem, and whether that problem has enough demand for a solution, until that software is being widely adopted and used (or, as is often the case, not used at all). Software is effectively new product development, every time, unless it is merely replacing existing software.

In effect, for every new piece of software, market risk is the main risk, not delivery risk, and we should address it the way successful new businesses do: by validating in the market.

There is good technical debt & bad technical debt

Technical debt has a bad reputation in our industry: most people are aggressively against it, yet it is as pervasive as anything - almost all code has some. But I would claim that just as the economy has good debt (useful capital investments) and bad debt (loans for private luxury consumption), the same types exist for technical debt.
Furthermore, I would challenge anyone to solve a problem they are entirely unfamiliar with without producing any technical debt whatsoever at any point. Technical debt is a natural thing in product development - you just have to make sure you pay it down before it becomes a real burden.

What are some examples of good technical debt?

  • Debt that helps you acquire knowledge about the market/problem you are addressing.
  • Debt that helps you better understand how to solve the underlying problem.
  • Debt that helps you get to market faster to get feedback.

Examples of bad debt that need to be repaid immediately:

  • Debt that slows your progress down.
  • Debt that jeopardises the reliability of the software.
  • Debt that is taken on for political reasons to satisfy non-customer/user stakeholders.

My point is that, just as with navigating scope, if there is a choice between getting working software into the hands of users sooner, or taking significantly longer “perfecting” something, getting a working product into the hands of customers is almost always preferable. I am not suggesting we take technical debt lightly, simply that a more nuanced view of it is preferable.

Code has little value, knowledge on how to solve the problem is priceless

Corollary to the points made above and in my previous post is my very firm belief that most software should be written to eventually be disposed of. The easier it is to dispose of a piece of software, the better.

Your version 1.0 of an attempt at a solution to a problem will almost always be largely wrong, littered with technical debt, usability problems and other misalignments. So you had better plan on being partly wrong and eventually having to replace large parts of it.
The value of version 1.0 is rarely in the code produced, but in the shared knowledge acquired through the process among those involved in producing it (including business stakeholders and users).

“Let’s rewrite it!” is thrown around far too often in software. It is sometimes appropriate, but rarely done under the right circumstances, in the right way. Firstly, rewrites, when they occur, are mostly done wrong, in a “big bang” manner rather than carefully and incrementally. There are many perfectly good ways of replacing a system where you have a full and complete system in place from day 1, carefully migrated feature-by-feature. Incidentally, a careful and incremental migration is often indistinguishable from a careful “system level refactoring” - where does replacement/rewrite begin and refactoring end? You tell me.
Secondly, when rewrites are suggested by people who are new to a problem, who were not involved in the creation of the initial solution, beware! If you intend to replace something without making use of the knowledge acquired during the creation of the initial solution, what are you hoping to achieve?

So in summary: market risk (whether software has value in the hands of users) is the primary risk of software delivery. To address this, technical debt is often unavoidable, and in some cases even desirable. To address the risk of technical debt, write software so that it is easy to dispose of instead of being too hung up on the sunk cost. In terms of organisational value on the capital investment of product development, the value often lies in the knowledge acquired by those involved rather than in the code itself.
To make a (possibly bad & inappropriate) analogy to manufacturing: the plants and factories of digital products are not the pre-existing code, but the brains of those who truly understand how the problem and the market fit together and how the problem is solved. The price/investment required to get there is the uncertain process of creating a product, getting it into the hands of customers and users, and repeating until they start realising the value of your efforts.

Trying to fix the EU VAT MOSS mess


If you run a business that sells digital products or services, you are probably aware that the EU is instituting some rather radical changes to how Value Added Tax works for products and services wholly delivered online. To sum it up: previously, VAT was paid in the country where a business was located. If you were a UK business selling to consumers, you would charge UK VAT to consumers in the EU regardless of where in the EU they were located.

This is all about to change.
From January 1st 2015, if your product is an “e-service”, in other words sold and delivered online in an automated way, you need to start charging VAT at the prevailing rate in the country you are delivering to. You also need to store two, preferably three, pieces of non-contradictory evidence proving that the service was indeed delivered to the country you claim. If that isn’t enough, your VAT report now has to reflect how much VAT-chargeable sales you have delivered to each and every EU country. A final kicker: businesses with turnover below a certain threshold, which previously didn’t have to charge or report VAT, now have to do so from the first pound or euro of digital sales.

I probably don’t even have to begin explaining just how business/entrepreneur unfriendly these new rules are. Others have written at length about this aspect of the new legislation.

Ibuprofen for your VAT headache

Recursivity is a small software company, and we have software products of our own for which this becomes a headache. Luckily we knew about this, understand how tax works, and were somewhat prepared. We wrote a solution to this problem for internal use some time ago, and as the noise around VAT MOSS (as this initiative is called) became louder, we realised there might be some mileage in sharing what we’ve built internally to help others.

So that’s exactly what we’ve done, by launching VAT Boss (see the pun in there?) - a platform where you’ll be able to:

  • Calculate the correct VAT rate to apply given location evidence.
  • Validate location evidence, such as ensuring VAT numbers are legitimate when supplying businesses, geo-locating IP addresses, validating phone number locations.
  • Store VAT evidence for reporting.
  • Generate VAT reports for regulatory compliance.
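The destination-based rate lookup at the core of rules like these can be sketched in a few lines. This is purely illustrative: the class, method names and hard-coded rates below are my own inventions, not the VAT Boss API, and real rates must come from an authoritative, up-to-date source.

```java
import java.util.Map;

public class VatSketch {
    // Illustrative standard rates in basis points (20.00% = 2000); not authoritative.
    static final Map<String, Integer> RATE_BP = Map.of("GB", 2000, "DE", 1900, "FR", 2000);

    // VAT due in pence/cents on a net amount, at the rate of the customer's country.
    static long vatDue(String countryCode, long netAmount) {
        Integer bp = RATE_BP.get(countryCode);
        if (bp == null) throw new IllegalArgumentException("No rate for " + countryCode);
        return netAmount * bp / 10000;
    }

    public static void main(String[] args) {
        // A £10.00 (1000 pence) e-service sold to a German consumer is charged German VAT.
        System.out.println(vatDue("DE", 1000)); // 190 pence
    }
}
```

The lookup itself is trivial; the hard parts in practice are keeping the rate table current, validating the location evidence, and storing it for reporting.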

If you want access to this, sign up now - we’ll be opening the doors over the next week or two.

This of course is no panacea, especially not if your company lacks tech savvy people - while we’re looking to make this as easy as possible to integrate into your existing online sales, it will inevitably require exactly that: some integration. Our initial efforts are aimed at having a simple API and simple reporting, which should be available very soon.

Laughing at the EU? Not so fast

If you are outside the EU, you might find it amusing that the EU does this to its businesses. Not so fast: if you are a business outside the EU (say in the US or anywhere else) but you sell to EU consumers, the rules apply to your business too. It might be rational to think “If I don’t comply, what are they going to do about it?”, but it might not be a line worth taking if you end up in their crosshairs.

We have no great love of the new VAT MOSS rules. But we’d rather come up with solutions than just complain or ignore the impending problem. VAT Boss is not a solution we expect to make a lot of money off, especially if the EU eventually regains its sanity, but at the same time it is a solution we will use internally, and that we might as well share with the outside world for a fee.

We need Functional Programming BECAUSE a large subset of us are average or below


There has been a meme going around for several years in the programming community that “average programmers” don’t get functional programming (often from functional programmers) or that “functional programming is too complex for the average programmer” (often from those unwilling to learn functional programming).

I would contend that functional programming is an absolutely crucial evolution precisely because a large subset of us are average or below average. If you understand the definition of “average” you shouldn’t be offended by me calling a large number of programmers average or below.

The breakdown of imperative programming under growing complexity

Jessitron has written a great post on two models of computation which largely dovetails with my argument, which I will elaborate below.
Imperative programming breaks down under increasing complexity and richness, consider the following imperative example in Java:

public void addBar(Foo foo){
    Bar bar = new Bar("baz");
    foo.setBar(bar);
    dao.save(bar);
    dao.update(foo);
}

  • Will this code work?
  • Does foo need to be updated before bar is saved, or the other way around?
  • Can we trust that any of the methods called only do what the names imply?

The answer to all three questions is: you couldn’t possibly know without holding a complete model of the entire application in your head together with knowledge of the libraries it uses!
Can you imagine the cognitive load this puts on your brain when you have to build a mental model of a piece of code’s entire universe to be able to reason about it in a meaningful way?

Some would argue that you should “just know” how your libraries work, what dao does and so on. But this is precisely my point: this is unnecessary cognitive strain on your brain, brain cycles going into trying to remember the intricacies of other code, brain cycles that could be more productively put to use reasoning about the problem at hand.

Some would argue that you should “be disciplined” and follow a precise set of codified practices to ease the strain. Fair enough, assuming you follow these rules, and everyone else does so as well (which is quite a bold assumption in my experience), you now only have to keep X hundred rules in your head to reason about the code. Still: a very high cognitive load to put on your brain when good tools and languages could offload this.

Functional programming to the rescue!

I am not going to define functional programming in its entirety here (others have done it better), but if we assume the following properties hold true for FP:

  • Immutable data: an assigned value, and the values it is composed of, do not change over its lifetime.
  • Referential transparency: the same inputs to a function always return the same result, and the function does nothing else.
  • Higher-order functions: Functions can take functions as arguments, and return functions. Functions are first class values.

Now let’s look at the simplest possible piece of code, calculating y by applying the function f to x:

let y = f x

If the 3 properties hold true, we know that for f(x), we will always get the same y and the computation of y is the only thing that occurs - no nuclear missiles are launched.
We also know that we couldn’t say x = f x, because this simply wouldn’t make any sense - x already exists and can’t be re-assigned.
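To make this concrete outside an ML-style syntax, the same three properties can be sketched in Java. This is an illustrative sketch of my own; the class and method names are invented, not from any particular library:

```java
// Immutable data: all fields final, no setters.
final class Point {
    final int x;
    final int y;
    Point(int x, int y) { this.x = x; this.y = y; }
    // "Modification" returns a new value instead of mutating this one.
    Point withX(int newX) { return new Point(newX, y); }
}

public class FpSketch {
    // Referential transparency: same input, same output, no side effects.
    static int square(int x) { return x * x; }

    // Higher-order function: takes a function as an argument.
    static int applyTwice(java.util.function.IntUnaryOperator f, int x) {
        return f.applyAsInt(f.applyAsInt(x));
    }

    public static void main(String[] args) {
        int y = square(3);                       // "let y = f x": always 9, nothing else happens
        int z = applyTwice(FpSketch::square, 2); // square(square(2)) = 16
        Point p = new Point(1, 2).withX(5);      // the original Point is left unchanged
        System.out.println(y + " " + z + " " + p.x);
    }
}
```

Because square can be given any int without consulting the rest of the program, you can reason about it in complete isolation - which is exactly the point being made here.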

Ok, that’s all pretty obvious, but what’s the point? Well, all of a sudden you achieve the following:

  • You do not have to build a mental model of the whole system and its libraries to reason about it.
  • As a corollary, you can reason about any subset of the system, however small, without even having to understand what the whole system does.

The benefits of this hardly have to be enumerated, but I’m going to enumerate a few practical ones anyway:

  • You don’t have to rely on your memory of something you did possibly years ago to deal with new requirements, changes or to track down bugs.
  • For the managers out there: key-person dependencies (“the guy who knows everything about the system”) become less of an issue.
  • New people can become productive much faster, no longer having to spend the time building a mental model of everything.

Dare I say it? For the bean-counters and risk-managers out there, functional programming makes people working on a system more interchangeable as long as they have the requisite knowledge to understand and reason with the general concepts used in a system.

Towards Functional Programming: mostly a question of humility, education & self-education

After slowly immersing myself in functional programming over the last 6-7 years, coming across various organisations and a multitude of people in many stages of FP adoption or resistance, I’ve learned that slow uptake of and resistance to functional programming often has nothing to do with the usual arguments bandied about. The real reasons broadly fall into two categories:

Pride & prestige: people who may have spent decades in their field becoming recognised as “experts” may be reluctant to let go of their expert label, or admit to themselves or others that they may yet again be beginners and have plenty to learn. Vested interest in the status quo is always strong in any field, and the humility it takes to say “I’m a beginner again in this subset of my chosen field” is especially hard to find for those who have put in the greatest investment in the way things are.

Conflating unfamiliarity with complexity: another common mental trap is to declare that which is merely unfamiliar to be complex or complicated. I think this one is closely related to pride - an unwillingness to learn something new, or to admit even to yourself that you may not be an expert in everything programming-related, makes it far easier to simply label the unfamiliar as “complex”.

Functional programming is in fact not any harder than imperative programming. As I have explained in this post, it makes things far easier and far less complex, and gives us the tools to better deal with complexity in the large.
In 2014, there is absolutely no reason to strain your brain building a mental model of an entire system in your head just to do your job as a programmer productively. This is a losing battle, as the limits of human working memory will eventually make it impossible for anyone to deal with it all.
Nor is there any reason for us to persist in using an inappropriate level of abstraction loosely based on the physical workings of manipulating silicon memory. Let us move into the 21st century once and for all.

Why incremental delivery is a business concern first, and a technical one a distant second


One of the most poorly understood concepts in product/software development is incremental delivery. “Waterfall” organisations certainly don’t get it, but neither do most who claim to be using/doing/being “Agile”, “Kanban”, “Lean” or whatever the flavour and buzzword of the month is.

If you ask Agile proponents, you will usually get answers along the lines of “delivering value incrementally” or “de-risking technical delivery” through smaller batches.
Those who are further along the scale of “getting it” may mention that incremental delivery is about building the right thing and discovering along the road which exact set of features actually solves the underlying problem, rather than speculating early about what will. This school of thought is far closer to the real benefits of incremental delivery, but it still doesn’t quite hit the nail on the head.

Incremental delivery, the insufficient bridge building analogy

An analogy I have sometimes seen for incremental delivery of software is bridge building. The story goes something like this:

  • If you are building a river crossing as an Agile project, you first take people over a few at a time on a small raft.
  • Eventually you swap the raft for a boat that can carry a few more people.
  • Then maybe you build a simple, but rickety bridge to get even more people over.
  • Eventually you build a massive suspension bridge that will support all the traffic it needs.

All the while you have worked towards a big bridge that supports everyone crossing, you have had other means of getting people across, scaling up the number you can carry in one go.
However, this analogy is a bit unsatisfactory: if you know the objective is to get people across the river, and you know how many need to cross, why not just bite the bullet and build a sufficient bridge immediately?

Bridge to Nowhere


It’s about measuring demand & ensuring the problem is worth solving to begin with!

However, the points that almost no one gets, and the main reasons to deliver incrementally, are actually:

  1. Measure whether there is actually demand for the problem to be solved in the first place!
  2. Measure whether there is actually demand for the problem to be solved the way your solution solves it!

Those are the only two reasons, nothing else! If you do not get this, to go back to our bridge building analogy, you may end up building Bridges to Nowhere, finding out only after great effort and expense that there was in fact no one who wanted to cross that river in the first place.

Incremental delivery is crucial to prove the business case and value hypothesis for why you are building something in the first place!

But maybe you know exactly what the problem is and how it should be solved?
Well, chances are, even if you are scratching your own itch, that you don’t know. 8 out of 10 new businesses fail.
Whether you are building consumer software for a start-up or doing internal systems integration to be used in the cavernous depths of an enterprise megacorp, software development is new product development. This means the context in which it is built is its “market”, and the product development itself is a “start-up”, even if no one outside will ever know it’s there.

If you are delivering software, you are delivering a new product of some description. Whether you want it or not, market forces are at work:

  • Is there sufficient demand for the proposed problem to be solved?
  • Is there sufficient demand for the problem to be solved the way you are solving it?
  • Is the demand sufficient to cover and exceed the development cost?

Have you ever come across or heard of an internal initiative or application that was eventually abandoned because no one used it? Market forces. The application or system not being used was evidence that there was no internal demand within the organisation for what it did.

The best way to address these questions is to run your software delivery as a series of experiments, soliciting market feedback to prove or disprove your value hypothesis & business case.

Market risk is the primary concern, decreased delivery risk & incremental value are secondary effects

So let’s sum things up: decreasing delivery risk through smaller batches is still a great benefit of incremental delivery. But it is a secondary concern compared to addressing whether what is being built is worth building.

Delivering value incrementally is a potential benefit of incremental delivery, IF it turns out that you are building something worthwhile. But you are actually only delivering value once you have achieved some level of Product-market fit and you are starting to transition into growth.

Until the point that you have actually proven the worthiness of the problem to be solved, and the solution to solve the problem, you are just dealing with the sunk cost of a series of experiments to try to prove a value hypothesis.

Those who have read this far may already have realised that most software deliveries, even those claiming to do incremental delivery, are effectively stabbing in the dark. Like a drunken gambler at a casino, they put all their money on one throw of the dice, one attempt at proving value, then more often than not wonder what went wrong and where all the money went.

How Ansible & Docker fit: Using Ansible to bootstrap & coordinate Docker containers


There are a lot of exciting tools in the infrastructure & virtualisation space that have emerged in the last couple of years. Ansible & Docker are probably two of the most exciting ones in my opinion. While I’ve already used Ansible extensively, I’ve only started to use Docker, so that’s the big caveat emptor with regards to the contents of this post.

What’s Docker & why should I care?

Docker describes itself as a “container platform”, which at a first glance can be easily confused with a VM. Wikipedia describes Docker containers in the following way:

Docker uses resource isolation features of the Linux kernel such as cgroups and kernel namespaces to allow independent "containers" to run within a single Linux instance, avoiding the overhead of starting virtual machines.

My take on where Docker is useful and not useful:

1. As a “process”/application container

Docker is useful as a container for distinct processes or groups of processes that make up an application. A typical example of this would be using a container that runs an nginx server that has https configured, serves up static content and javascript, as well as running a webapp behind the nginx server that serves up dynamic pages or RESTful services.
This may be especially useful where you run applications or processes that should be isolated for security purposes but can share powerful hardware without the overhead of a full VM, and where potential CPU or memory contention between containers is not an issue (on underutilised hardware this may even be desirable, rather than having a bunch of tin standing around mostly idling).

The real power of Docker’s “container” and “image” concepts, though, is that they make it trivial to share an application container with all its moving parts pre-configured as code (see Dockerfiles). Dealing with infrastructure as code and sharing it effectively between teams could be really disruptive when it comes to breaking down barriers to cross-team collaboration in larger organisations.

Docker as an app container


2. As a disposable sandboxed environment to run user-initiated, potentially dangerous processes or data

This is actually where Docker comes in for my day-to-day use on our Code Qualified service for automated programmer testing, which we can use as an example: when users submit their solutions to programming problems, the code has to be run, analysed and checked for correctness. However, a malicious user could potentially submit malicious code to Code Qualified’s servers to try to execute just about any nasty, destructive operation. This is where Docker comes in: we run submitted code in temporary Docker containers, which means at worst they will destroy a temporary container that is only intended to live for a few minutes, and score a big zero on the test they have “solved”.

3. Do NOT try to use Docker as a VM replacement or to run “entire systems”

Finally, we get to what not to do with Docker: don’t try to run it as a full OS/system of sorts. It is not what it is meant for. As Phusion have pointed out, the base Docker images available lack a number of important system settings & services, so trying to run Docker containers as a substitute for a “real” system/VM can be fraught with potential issues.

I can run commands in my Dockerfile, why use Ansible?

You can run arbitrary commands in a Dockerfile that builds a Docker image, among them apt-get if you are building an image based on Ubuntu. So why is Ansible relevant?

1. Bootstrapping Docker containers

I knocked together a very simple example on GitHub that uses Ansible to set up a Vagrant VM that then runs Docker containers where the images are built in part by... Ansible (the Vagrant part isn’t relevant if you’re on Linux; I used it to set up a VM to run Docker on, as OS X doesn’t support Docker natively).
While it’s entirely possible to script everything in the Dockerfile with the RUN directive, I find Ansible scripts useful for a couple of reasons:

  • Ansible scripts are portable. I can test them on a Vagrant VM, on an AWS EC2 instance, on a “real” Linux machine or on Docker.
  • Ansible scripts are idempotent: if you run them again later against a container/VM/machine, they can act as a test that the box is in fact properly set up (and fix anything that isn’t).
  • Ansible scripts can provision multiple hosts concurrently: this is what your Ansible inventory-file is for.

To run Ansible while building a Docker image, you can create a stub inventory file. Since the playbook is run with a local connection, a minimal one listing just the local host will do, for example:

localhost

Then make your Dockerfile look something like this (where you have an Ansible script called “provision.yml” in the same directory, in this case likely installing nginx and setting it up correctly):

FROM ubuntu:14.04.1
MAINTAINER Wille Faler ""
RUN apt-get update
RUN apt-get install -y software-properties-common
RUN apt-add-repository ppa:ansible/ansible
RUN apt-get update
RUN apt-get install -y ansible
ADD inventory-file /etc/ansible/hosts
ADD provision.yml provision.yml
RUN ansible-playbook provision.yml -c local
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
CMD ["nginx"]

2. Setting up and coordinating Docker hosts

Ansible has a Docker module, which you can use to build, run, start, stop, link and coordinate your Docker containers & images in various ways. This is where Ansible really shines, and it will almost always be preferable to handcrafted, fragile shell-scripts.
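For illustration, a play using that module might look something like the following. This is a sketch assuming the 2014-era docker module; the host group, container and image names are hypothetical, and parameter names vary between Ansible versions, so check the module documentation for yours:

```yaml
# Sketch: ensure an nginx container is running on each Docker host.
- hosts: dockerhosts
  tasks:
    - name: run the web container
      docker:
        image: mynginx:latest   # hypothetical image name
        name: web
        state: running
        ports:
          - "80:80"
```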

In fairness, I haven’t really used the Docker module that much in anger yet, though I suspect I will use it extensively eventually. These are the sort of tasks where Ansible is really brilliant even without Docker in the picture, so I wouldn’t expect that to change one bit with Docker. In fact I suspect Ansible will become even more integral to running infrastructure when you have multiple hosts running multiple Docker containers.

What else is there?

I have only really scratched the surface here, but it is a brief summary of my understanding so far of how tools like Ansible and Docker fit into the infrastructure eco-system - I expect my understanding and views to evolve over time as I get deeper into it.

I haven’t even started looking at things like Core OS, that together with etcd & fleet could prove to be interesting and potentially valuable building blocks, but I’ll leave that for another day, when I have had the time to explore it more deeply.
