
Random thoughts: Javascript frameworks, do we need them?


Javascript frameworks have been all the rage over the last few years if you have the misfortune of developing for web browsers (I say misfortune because Javascript is an awful language).
As far as I can tell, it started with Backbone and moved on to things like Knockout, Ember and, of course, Angular.

I can understand the original reasons for the emergence of Javascript frameworks: only a few years ago, Javascript support across browsers was fragmented, and lots of hacks had to be applied to do simple things in a cross-browser compatible way. Javascript was, in addition to being an awful language, also fairly low-level, in that its native APIs made simple things difficult and laborious.

These days though? I’m not so sure.
Over the last year, I’ve built a couple of one-page applications that made fairly extensive use of Angular.js, and after a steep learning curve, I’m now fairly comfortable with its ins and outs. With the benefit of hindsight, I wonder what benefits it actually brought to the table beyond library lock-in.
Off the top of my head, I could argue that Angular provides:

  • Routing
  • Templating
  • Two-way data-binding

However, I can think of smaller, simpler libraries that do much the same without imposing the same level of ceremony or library dependence: Sammy.js for routing, jQuery and Handlebars for templating and general functionality. Hardly fashionable, not at all flavours of the day; probably outright boring choices in the minds of most people having to deal with Javascript. Nonetheless, using these smaller, more focused libraries, I’m not missing anything fundamental from Angular other than possibly two-way data-binding, and that is a gap easily filled with a small amount of code (see the sketch below).
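
To show just how small that gap is, here is a minimal sketch of two-way data-binding built on jQuery and Handlebars. The data-bind attribute convention, the element IDs and the state/render names are my own illustrative assumptions, not part of either library:

var state = { name: "" };

// Handlebars compiles a template string into a render function.
var template = Handlebars.compile("<p>Hello, {{name}}!</p>");

// Model -> view: re-render the output element from the current state.
function render() {
    $("#output").html(template(state));
}

// View -> model: any element with a data-bind attribute pushes its
// value back into the state, which triggers a re-render.
$(document).on("input", "[data-bind]", function () {
    state[$(this).attr("data-bind")] = $(this).val();
    render();
});

render();

With an input carrying data-bind="name" and a div with id "output" on the page, typing in the input updates the model, and every model change re-renders the view.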

I can full well see why the heavier Javascript libraries have served a purpose in furthering browser-based development: they used to paper over fragmented, inconsistent browser behaviour. They’ve done their bit to establish and thrash out good and bad patterns of browser-based development, and to hide some of the horrors of Javascript and browsers.
But my gut feeling is that we are reaching peak Javascript framework - the need for them is quickly diminishing, and smaller, more focused libraries can do the same thing just as well if not better, even if those smaller libraries are sometimes older and less fashionable.

JVM and Javascript eco-systems, parallel stories

In a way, the Javascript eco-system’s development and maturing over the last 5 years or so has to some degree mirrored the development of the JVM eco-system: we’ve gone through the peaks and troughs of heavyweight frameworks and started moving back to simpler ways with smaller, focused libraries.

The evolution that followed on the JVM was that Java the language became less important with the emergence of Scala and Clojure. The most exciting development in browser-based development for a very long time mirrors that exactly: no longer being dependent on writing Javascript, but having the option of compiling to Javascript from saner languages that are popping up all over the place.
The most exciting and sane of these languages that I am aware of is no doubt Purescript, a Haskell-inspired, type-safe language with a very easy to use Foreign Function Interface to Javascript.

What’s my point? Well, as blog posts go, this is more of a brain-dump of random thoughts I’ve had over the last few months about the eco-system than a logically stringent argument. But the conclusion that is starting to form in my head is that the big all-singing, all-dancing frameworks will give way to smaller, more focused libraries - in some cases the same old libraries we used before the big frameworks came round. Just as in so many other languages and eco-systems, it is slowly dawning on us that small is beautiful and big monolithic frameworks are more pain than they are worth: be clear about your requirements and pick small libraries that fit them.

Technical debt as "venture investment" - software should be disposable


I have already written at length about the great uncertainty in knowing whether writing software is worthwhile in the first place.
To recap my previous point: I claim that it is impossible to know, even with relatively rich information, whether software is solving a worthwhile problem in the first place, and whether that problem has enough demand for a solution, until that software is widely adopted and used (or, as is often the case: not used at all). Software is effectively new product development, every time, unless it is merely replacing existing software.

In effect, for every new piece of software, market risk is the main risk, not delivery risk, and we should address this risk the same way successful new businesses do: by validating on the market.

There is good technical debt & bad technical debt

Technical debt has a bad reputation in our industry; most people are aggressively against it, yet technical debt is pervasive: almost all code has it. But I would claim that just as the economy has both good debt (useful capital investments) and bad debt (loans for private luxury consumption), the same types exist for technical debt.
Furthermore, I would challenge anyone to solve a problem they are entirely unfamiliar with without producing any technical debt whatsoever at any point. Technical debt is a natural thing in product development - you just have to make sure you pay it down before it becomes a real burden.

What are some examples of good technical debt?

  • Debt that helps you acquire knowledge about the market/problem you are addressing.
  • Debt that helps you better understand how to solve the underlying problem.
  • Debt that helps you get to market faster to get feedback.

Examples of bad debt that needs to be repaid immediately:

  • Debt that slows your progress down.
  • Debt that jeopardises the reliability of the software.
  • Debt that is taken on for political reasons to satisfy non-customer/user stakeholders.

My point is that, just as with navigating scope, if there is a choice between getting working software into the hands of users sooner or taking significantly longer “perfecting” something, getting a working product into the hands of customers is almost always preferable. I am not suggesting we take technical debt lightly, simply that a more nuanced view of it is preferable.

Code has little value, knowledge of how to solve the problem is priceless

A corollary to the points made above and in my previous post is my very firm belief that most software should be written to eventually be disposed of. The easier it is to dispose of a piece of software, the better.

Your version 1.0 of an attempt at a solution to a problem will almost always be largely wrong: littered with technical debt, usability problems and other misalignments. So you had better plan on being partly wrong and eventually having to replace large parts of it.
The value of version 1.0 is rarely in the code produced, but in the shared knowledge acquired through the process by those involved in producing it (including business stakeholders and users).

“Let’s rewrite it!” is thrown around far too often in software. A rewrite is sometimes appropriate, but rarely done under the right circumstances, in the right way. Firstly, rewrites, when they occur, are mostly done wrong - in a “big bang” manner rather than carefully and incrementally. There are many perfectly good ways of replacing a system that keep a full and complete system in place from day 1 while it is carefully migrated feature-by-feature (see the sketch below). Incidentally, a careful and incremental migration is often indistinguishable from a careful “system-level refactoring” - where does replacement/rewrite begin and refactoring end? You tell me.
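
To make “carefully and incrementally” concrete, here is a minimal sketch of one well-known approach: a front-door proxy that routes already-migrated features to the new system and everything else to the old one. The host names, ports and route list are illustrative assumptions, not a prescription:

var http = require("http");

// Features that have already been rewritten live in the new system;
// everything else is still served by the old one.
var MIGRATED = ["/invoices", "/reports"];

function target(url) {
    var migrated = MIGRATED.some(function (prefix) {
        return url.indexOf(prefix) === 0;
    });
    return migrated ? { host: "new-system", port: 8081 }
                    : { host: "old-system", port: 8080 };
}

http.createServer(function (req, res) {
    var t = target(req.url);
    var upstream = http.request({
        host: t.host,
        port: t.port,
        path: req.url,
        method: req.method,
        headers: req.headers
    }, function (response) {
        res.writeHead(response.statusCode, response.headers);
        response.pipe(res);
    });
    req.pipe(upstream);
}).listen(3000);

Migrating one more feature is then a one-line change to the route list, and the old system can be retired once the list covers everything - at no point is there anything less than a complete, working system in place.
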
Secondly, when rewrites are suggested by people who are new to a problem, who were not involved in the creation of the initial solution: beware! If you intend to replace something without making use of the knowledge acquired during the creation of the initial solution, what are you hoping to achieve?

So, in summary: market risk (whether software has value in the hands of users) is the primary risk of software delivery. To address this, technical debt is often unavoidable, and in some cases even desirable. To address the risk of technical debt, write software so that it is easy to dispose of, instead of getting too hung up on the sunk cost. In terms of the organisational value of the capital investment in product development, the value often lies in the knowledge acquired by those involved rather than in the code itself.
To make a (possibly bad & inappropriate) analogy to manufacturing: the plants and factories of digital products are not the pre-existing code, but the brains of those who truly understand how the problem and the market fit together and how the problem is solved. The price/investment required to get there is the uncertain process of creating a product, getting it into the hands of customers and users, and repeating until they start realising the value of your efforts.

Trying to fix the EU VAT MOSS mess


If you run a business that sells digital products or services, you are probably aware that the EU is instituting some rather radical changes to how Value Added Tax works for products and services wholly delivered online. To sum it up: previously, VAT was to be paid in the country where the business was located. If you were a UK business selling to consumers, you would charge UK VAT to consumers in the EU regardless of where in the EU they were located.

This is all about to change.
From January 1st 2015, if your product is an “e-service” - in other words, sold and delivered online in an automated way - you need to start charging VAT at the prevailing rate in the country you are delivering to. You also need to store two, preferably three, pieces of non-contradictory evidence proving that the service was indeed delivered to the country you claim. If that isn’t enough, your VAT report now has to reflect how much in VAT-chargeable sales you have delivered to each and every EU country. A final kicker: businesses whose turnover was below a certain threshold previously didn’t have to charge or report VAT at all; they now have to do so from the first pound or euro of digital sales.
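
To make the mechanics concrete, here is a minimal sketch of the evidence rule. The rates table and the evidence shape are illustrative assumptions - not real-time VAT data, and not our API:

var VAT_RATES = { GB: 0.20, DE: 0.19, FR: 0.20 }; // illustrative standard rates

// Each piece of evidence resolves to a country, e.g.
// { type: "billing-address", country: "DE" }
// { type: "ip-geolocation",  country: "DE" }
function vatRateFor(evidence) {
    var counts = {};
    evidence.forEach(function (e) {
        counts[e.country] = (counts[e.country] || 0) + 1;
    });
    // The rules require at least two non-contradictory pieces of evidence.
    for (var country in counts) {
        if (counts[country] >= 2 && VAT_RATES[country] !== undefined) {
            return { country: country, rate: VAT_RATES[country] };
        }
    }
    throw new Error("Evidence is insufficient or contradictory");
}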

I probably don’t even have to begin explaining just how business/entrepreneur unfriendly these new rules are. Others have written at length about this aspect of the new legislation.

Ibuprofen for your VAT headache

Recursivity is a small software company; we have software products of our own for which this becomes a headache. Luckily, we knew about this, understand how tax works, and have been somewhat prepared. We wrote a solution to this problem for internal use some time ago, and as the noise around VAT MOSS (as this initiative is called) grew louder, we realised there might be some mileage in sharing what we’ve built internally to help others.

So that’s exactly what we’ve done by launching VAT Boss (see the pun in there?) - a platform where you’ll be able to:

  • Calculate the correct VAT rate to apply given location evidence.
  • Validate location evidence, such as ensuring VAT numbers are legitimate when supplying businesses, geo-locating IP addresses, validating phone number locations.
  • Store VAT evidence for reporting.
  • Generate VAT reports for regulatory compliance.

If you want access to this, sign up now - we’ll be opening the doors over the next week or two.

This is of course no panacea, especially not if your company lacks tech-savvy people - while we’re looking to make this as easy as possible to integrate into your existing online sales, it will inevitably require exactly that: some integration. Our initial efforts are aimed at a simple API and simple reporting, which should be available very soon.

Laughing at the EU? Not so fast

If you are outside the EU, you might find it amusing that the EU is doing this to its businesses. Not so fast: if you are a business outside the EU (say, in the US or anywhere else) but you sell to EU consumers, the rules apply to your business too. It might seem rational to think “If I don’t comply, what are they going to do about it?”, but it may not be a line worth taking if you end up in their crosshairs.

We have no great love for the new VAT MOSS rules. But we’d rather come up with solutions than just complain or ignore the impending problem. VAT Boss is not a solution we expect to make a lot of money off, especially if the EU eventually regains its sanity, but at the same time it is a solution we will use internally, and one we might as well share with the outside world for a fee.

We need Functional Programming BECAUSE a large subset of us are average or below


There has been a meme going around for several years in the programming community that “average programmers” don’t get functional programming (often from functional programmers) or that “functional programming is too complex for the average programmer” (often from those unwilling to learn functional programming).

I would contend that functional programming is an absolutely crucial evolution precisely because a large subset of us are average or below average. If you understand the definition of “average” you shouldn’t be offended by me calling a large number of programmers average or below.

The breakdown of imperative programming under growing complexity

Jessitron has written a great post on two models of computation that largely dovetails with the argument I will elaborate below.
Imperative programming breaks down under increasing complexity and richness. Consider the following imperative example in Java:

public void addBar(Foo foo){
    Bar bar = new Bar("baz");
    foo.setBar(bar);
    dao.update(foo);
    dao.save(bar);
}

  • Will this code work?
  • Does foo need to be updated before bar is saved, or the other way around?
  • Can we trust that any of the methods called only do what the names imply?

The answer to all three questions is: you couldn’t possibly know without holding a complete model of the entire application in your head, together with knowledge of the libraries it uses!
Can you imagine the cognitive load this puts on your brain when you have to create a mental model of a piece of code’s entire universe just to reason about it in a meaningful way?

Some would argue that you should “just know” how your libraries work, what dao does, etc. But this is precisely my point: this is unnecessary cognitive strain on your brain - brain cycles going into trying to remember the intricacies of other code, brain cycles that could be more productively put to use reasoning about the problem at hand.

Some would argue that you should “be disciplined” and follow a precise set of codified practices to ease the strain. Fair enough, assuming you follow these rules, and everyone else does so as well (which is quite a bold assumption in my experience), you now only have to keep X hundred rules in your head to reason about the code. Still: a very high cognitive load to put on your brain when good tools and languages could offload this.

Functional programming to the rescue!

I am not going to define functional programming in its entirety here (others have done it better), but if we assume the following properties hold true for FP:

  • Immutable data: once assigned, a value (and the values of which it is composed) does not change over its lifetime.
  • Referential transparency: the same inputs to a function always produce the same result, and the function does nothing else.
  • Higher-order functions: functions can take functions as arguments and return functions. Functions are first-class values.

Now let’s look at the simplest possible piece of code, calculating y by applying the function f to x:

let y = f x

If the 3 properties hold true, we know that for f(x), we will always get the same y and the computation of y is the only thing that occurs - no nuclear missiles are launched.
We also know that we couldn’t say x = f x, because this simply wouldn’t make any sense - x already exists and can’t be re-assigned.

Ok, that’s all pretty obvious, but what’s the point? Well, all of a sudden you achieve the following:

  • You do not have to build a mental model of the whole system and its libraries to reason about it.
  • As a corollary, you can reason about any subset of the system, however small, without having to understand what the whole system does (see the sketch below).
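
You can approximate these properties even in plain Javascript, by convention rather than compiler guarantee. A small sketch (all names here are my own, purely for illustration):

// f is referentially transparent: same input, same output, nothing else.
var f = function (x) { return x * x + 1; };

// Immutability by convention: freeze the data so nothing can mutate it.
var point = Object.freeze({ x: 3 });

// This line can be reasoned about in complete isolation: y is f(3) = 10,
// regardless of what the rest of the program does.
var y = f(point.x);

// Higher-order functions: build new behaviour by composing pure pieces.
var compose = function (g, h) {
    return function (x) { return g(h(x)); };
};
var fTwice = compose(f, f);

console.log(y);         // 10
console.log(fTwice(2)); // f(f(2)) = f(5) = 26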

The benefits of this hardly have to be enumerated, but I’m going to enumerate a few practical ones anyway:

  • You don’t have to rely on your memory of something you did possibly years ago to deal with new requirements, changes or to track down bugs.
  • For the managers out there: key-person dependencies (“the guy who knows everything about the system”) become less of an issue.
  • New people can become productive much faster, no longer having to spend the time building a mental model of everything.

Dare I say it? For the bean-counters and risk-managers out there: functional programming makes the people working on a system more interchangeable, as long as they have the requisite knowledge to understand and reason about the general concepts used in the system.

Towards Functional Programming: mostly a question of humility, education & self-education

After slowly immersing myself in functional programming over the last 6-7 years, coming across various organisations and a multitude of people in many stages of FP adoption or resistance, I’ve learned that slow uptake of and resistance to functional programming sometimes has nothing to do with the usual arguments bandied about. The real reasons broadly fall into two categories:

Pride & prestige: people who may have spent decades in their field becoming recognised as “experts” may be reluctant to let go of their expert label, or admit to themselves or others that they may yet again be beginners and have plenty to learn. Vested interest in the status quo is always strong in any field, and the humility it takes to say “I’m a beginner again in this subset of my chosen field” is especially hard to find for those who have put in the greatest investment in the way things are.

Conflating unfamiliarity with complexity: another common mental trap is to declare that which is merely unfamiliar “complex” or “complicated”. I think this one is closely related to pride - an unwillingness to learn something new, or to admit even to yourself that you may not be an expert in everything programming-related, makes it far easier to just declare the unfamiliar “complex”.

Functional programming is in fact not any harder than imperative programming. As I have explained in this post, functional programming makes things far easier and far less complex, and gives us the tools to better deal with complexity in the large.
In 2014, there is absolutely no reason to strain your brain building a mental model of an entire system just to do your job as a programmer productively. That is a losing cause: eventually, the size of human working memory will simply make it impossible for anyone to deal with it all.
Nor is there any reason for us to persist in using an inappropriate level of abstraction loosely based on the physical workings of manipulating silicon memory. Let us move into the 21st century once and for all.

Why incremental delivery is a business concern first, a technical one a distant second


One of the most poorly understood concepts in product/software development is incremental delivery. “Waterfall” organisations certainly don’t get it, but neither do most who claim to be using/doing/being “Agile”, “Kanban”, “Lean” or whatever the flavour and buzzword of the month is.

If you ask Agile proponents, you will usually get answers along the lines of “delivering value incrementally” or “de-risking technical delivery” through smaller batches.
Those who are further along the scale of “getting it” may mention that incremental delivery is about building the right thing: discovering along the road which exact set of features actually solves the underlying problem, rather than speculating about it up front. This school of thought is far closer to the real benefits of incremental delivery, but it is still not quite hitting the nail on the head.

Incremental delivery, the insufficient bridge building analogy

An analogy that I have sometimes seen applied to incremental delivery of software is that of bridge building. The story goes something like this:

  • If you are building a river crossing as an Agile project, you first take people over a few at a time on a small raft.
  • Eventually you swap the raft for a boat that can carry a few more people.
  • Then maybe you build a simple, but rickety bridge to get even more people over.
  • Eventually you build a massive suspension bridge that will support all the traffic it needs.

All the while you have been working towards the big bridge that supports all the traffic, you have had other means of getting people across, scaling up the number of people you can get across in one go.
However, this analogy is a bit unsatisfactory: if you know the objective is to get people across the river, and you know how many people need to get over, why don’t you just bite the bullet and build a sufficient bridge immediately?

[Image: Bridge to Nowhere]

It’s about measuring demand & ensuring the problem is worth solving to begin with!

However, the two points that almost no one gets - the main reasons to deliver incrementally - are actually:

  1. To measure whether there is actually demand for the problem to be solved in the first place!
  2. To measure whether there is actually demand for the problem to be solved the way your solution solves it!

Those are the only two reasons, nothing else! If you do not get this then, to go back to our bridge-building analogy, you may end up building Bridges to Nowhere, finding out only after great effort and expense that no one wanted to cross that river in the first place.

Incremental delivery is crucial to prove the business case and value hypothesis for why you are building something in the first place!

But maybe you know exactly what the problem is and how it should be solved?
Well, chances are, even if you are scratching your own itch, that you don’t know. 8 out of 10 new businesses fail.
Whether you are building consumer software for a start-up or doing internal systems integration to be used in the cavernous depths of an enterprise megacorp, software development is new product development. This means the context in which it is built is its “market”, and the product development effort itself is a “start-up”, even if no one outside will ever know it’s there.

If you are delivering software, you are delivering a new product of some description. Whether you want it or not, market forces are at work:

  • Is there sufficient demand for the proposed problem to be solved?
  • Is there sufficient demand for the problem to be solved the way you are solving it?
  • Is the demand sufficient to cover and exceed the development cost?

Have you ever come across or heard of an internal initiative or application that was eventually abandoned because no one used it? Market forces. The application or system not being used was evidence that there was no internal demand within the organisation for what it did.

The best way to address these questions is to run your software delivery as a series of experiments, soliciting market feedback to prove or disprove your value hypothesis & business case.

Market risk is the primary concern, decreased delivery risk & incremental value are secondary effects

So let’s sum things up: decreasing delivery risk through smaller batches is still a great benefit of incremental delivery. But it is a secondary concern compared to addressing whether what is being built is worth building.

Delivering value incrementally is a potential benefit of incremental delivery, IF it turns out that you are building something worthwhile. But you only actually deliver value once you have achieved some level of product-market fit and are starting to transition into growth.

Until the point that you have actually proven the worthiness of the problem to be solved, and the solution to solve the problem, you are just dealing with the sunk cost of a series of experiments to try to prove a value hypothesis.

Those who have read this far may already have realised that most software deliveries, even those claiming to do incremental delivery, are effectively stabbing in the dark: like drunken gamblers at a casino, they put all their money on one throw of the dice, one attempt at proving value, then more often than not wonder what went wrong and where all the money went.


