It wasn't so long ago that king-sized value packs were in fashion. Driven by a consumerist trend, it sometimes seemed the only way you could buy anything was in large or family size: big was beautiful and economy of scale was the order of the day. Some of this still holds true; just as there will always be those who prefer to buy in bulk, there will always be products that are cheaper and easier to produce in batch quantities.
However, the food industry, for one, has come to realise that there are sections of our community who, for whatever reason, do not wish to purchase large quantities of its goods. These are not just people who lack the space for storage or the facilities to keep perishables in tip-top condition. More and more members of society are becoming aware of the health issues surrounding the over-consumption of food and are turning away from consumption for consumption's sake. Obesity has become a major threat to life in the western world. High-profile court cases, mainly in the US so far, have forced food manufacturers and merchants to recognise that compelling, or even enticing, customers to consume more than they actually need can be detrimental both to the customer and to the supplier's relationship with that customer.
There are parallels in the software industry. For far too long, software suppliers have been producing systems bloated with the fat of over-specified, over-engineered and often unnecessary functionality. That bloat has to be paid for by someone, usually the customer. The more functionality in the system, the greater the likelihood of malfunction and the greater the cost of maintenance. Again, usually paid for by the customer.
Some of this bloat comes from that great myth of reusable code, which leads developers to waste large amounts of time and money attempting to foresee the future by architecting extensible, flexible frameworks: frameworks they hope will allow the component classes to be reused in forthcoming applications, yet to be defined. That the customer for the current application has to pay the cost of constructing the framework and implementing the hooks into it doesn't seem to be a consideration.
Let's not lay all the blame at the suppliers' door, though. Because the usual fixed-scope contract doesn't allow the customer to change his mind about the specification without paying exorbitant costs, he must try to get every conceivable feature into the requirements document, no matter how meagre the probability of its actually being used. To him, the cost of trying to get a feature added later far outweighs the cost of building it now and later finding it unnecessary.
One approach the food industry has taken to combat this problem is to reduce the quantity of fat in its products and market them as 'lite' versions. Another strategy is to produce goods in smaller, bite-size chunks: large enough to taste and plenty to satisfy modest energy requirements, but not so big as to pile on the calories and do you harm. A sufficiency, if you will.
Similar strategies are being used in our industry. 'Lite' versions of software are not new, having been available from some vendors for some time, and there are other techniques that can reduce the amount of bloat in products. Iterative development is a way of producing software in bite-size chunks: build the application incrementally, feature by feature. As long as you follow the practice of continuous integration, making sure the software builds after each new feature is added, you can be sure that each release is more valuable than the previous one. Working this way gives your customer the option to stop when he feels the application contains enough value, rather than having to accept everything in the original spec regardless of its applicability or value. He can call a halt when he has had his fill, just as it makes sense to stop eating when you've consumed sufficient energy for your needs and sated your appetite. Enough is enough, but there are some prerequisites before we can do this.
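A minimal sketch of the idea, with invented feature names and values rather than figures from any real project: each iteration adds one feature to a releasable build, and the customer stops the moment the accumulated value meets his needs.

```python
# Incremental delivery sketch: one feature per release, releasable after
# each, and the customer may call a halt once he has enough value.
# Feature names and values are illustrative assumptions only.

features = [
    ("take orders online", 50_000),
    ("track deliveries", 30_000),
    ("email receipts", 12_000),
    ("export to spreadsheet", 5_000),
    ("animated splash screen", 500),
]

enough = 90_000  # the point at which this customer says "enough is enough"
accrued = 0

for release, (name, value) in enumerate(features, start=1):
    accrued += value
    print(f"Release {release}: added '{name}', total value £{accrued:,}")
    if accrued >= enough:
        print("Customer calls a halt: sufficient value delivered.")
        break
```

Run as written, the customer stops after the third release, and the low-value splash screen never gets built or paid for.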
We have to ditch the practice of creating big up-front designs containing every conceivable feature; we need a mechanism that ensures we have a working version of our software at the end of each iteration; and we need a means of measuring the business value of each feature delivered.
Most of all, we need to stop entering into those so-called 'fixed-price, fixed-scope' contracts that force us into providing more than the customer needs and force the customer into accepting it. 'Fixed' is a misnomer; what we really mean is 'increasing', as it's very rare for features to be removed from the specification once they have made the list. Usually we just add the ones we didn't know about, or forgot, at the beginning. We rarely take the time to reassess the value of the features already on the list and remove those that are no longer applicable; we just append more features and expand the price according to the degree of difficulty of each new feature. Even if we do ask to remove a feature, there is still a price to pay for rewriting the design and specification documents, whether the code has been written or not. If increasing-price, increasing-scope is the reality, and I suspect it is in the majority of cases, why don't we start by estimating just for the customer's single most important feature, then add an additional amount for each feature after that, relating the cost to the complexity, or degree of difficulty, of the task?
I'm guessing the reason customers seem to like entering into these contracts is that it gives them the impression they know what they are going to get for their money and can calculate the return on investment (ROI) quite simply: if a system costs £100,000 but will earn £120,000 in use, then it has an ROI of 20%. Perhaps it gives them the perception of being in control.
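The arithmetic behind that figure is a one-liner; a quick check of the example:

```python
def roi(cost, benefit):
    """Return on investment, expressed as a fraction of the initial cost."""
    return (benefit - cost) / cost

# The example from the text: £100,000 spent, £120,000 earned in use.
print(f"{roi(100_000, 120_000):.0%}")  # prints 20%
```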
This strikes me as a scattergun approach to ROI calculation, where the total costs are offset against the total benefits, which can't be right. We know the features of a piece of software are not equal in value: some are more valuable than others and will give greater benefit. We also know that features differ in their levels of complexity, and that some cost more to implement. It follows that some features give a higher ROI than others. Features often follow the 80/20 rule: 20% of the total effort produces 80% of the total value. Surely this information gives us even greater control, if we can use it to identify the features that give the highest rate of return?
If we serialise the production and delivery of features in priority order, we can start accruing the benefit from them earlier and make sure we bank the highest benefits first. This is the incremental funding model (IFM), discussed in the book "Software By Numbers" by Mark Denne and Jane Cleland-Huang.
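A rough sketch of the prioritisation step, using made-up per-feature figures. The book develops the full model with discounted cash flows and sequencing constraints between features; this toy version ignores all of that and simply orders features by benefit-to-cost ratio, so the cheapest, most valuable work starts earning its keep earliest.

```python
# Toy IFM-style prioritisation: deliver the features with the best
# benefit-to-cost ratio first. All figures are invented for illustration.

features = [
    # (name, cost to build in £, benefit per week in £)
    ("reporting dashboard", 40_000, 500),
    ("online payments",     25_000, 2_000),
    ("stock alerts",        10_000, 1_200),
]

# Highest weekly benefit per pound of build cost comes first.
for name, cost, weekly in sorted(features, key=lambda f: f[2] / f[1],
                                 reverse=True):
    print(f"{name}: £{weekly:,}/week for £{cost:,} "
          f"(payback in about {cost / weekly:.0f} weeks)")
```

Ordered this way, the stock alerts pay for themselves in around eight weeks and are already generating income while the slower-payback features are still being built.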
To keep it simple, let's imagine a 100-week project that we expect to make us £10,000 per week once complete. Using the 80/20 rule, we could expect it to be delivering £8,000 per week by week twenty, with that figure increasing through the rest of the project until it eventually delivers the full £10,000 per week by week 100. However, realising that the remaining 20% of business value will cost 80% of the total budget to deliver might well cause us to rethink our strategy. This may be the point where we decide that, for this project, enough is enough. Perhaps our money would earn a higher rate of return invested in a new project.
Additionally, the earlier we start to realise a return on our investment, the earlier we can start to make a profit. In the example above, the £8,000 per week the project is delivering from week twenty would accumulate to at least £640,000 by the end of the project.
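Taking the simplest reading of the example (a flat £8,000 per week from week twenty onward, ignoring the promised growth toward £10,000, which is why the figure is a floor), a back-of-envelope check confirms it:

```python
# Lower bound on accumulated return for the 100-week example: £8,000/week
# earned over the 80 weeks from week 20 to week 100. The true figure is
# higher, since the weekly return grows toward £10,000.
weekly_return = 8_000
weeks_earning = 100 - 20
print(f"£{weekly_return * weeks_earning:,}")  # prints £640,000
```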
Strangely, this is not the standard model for software development; as we said earlier, fixed-price, fixed-scope is still the norm, although it's a rare project that ends with the scope and price predicted at the beginning. Many opponents of incremental delivery dispute that systems can be delivered this way, stating that it is impossible to break applications down into such pieces. Their arguments remind me of the opponents of Charles Darwin's theory of evolution, who argued that evolution could not possibly work because they could not imagine a midway stage for an eye. 'What good is half an eye?' was the catchphrase. Of course, we now know that eyes went through many intermediate stages, from a simple patch of light-sensitive cells to the complex organ we possess today.
Imagine you were hungry and went into a restaurant where you weren't allowed to leave until you'd eaten every item on the menu, in whatever order the chef decided to serve it. You wouldn't find the experience pleasant or healthy, and like as not you wouldn't go back. Software should be more like a good restaurant: we should be able to choose what goes into our applications and the order in which it is delivered. We should be producing software in bite-sized chunks, not being forced to 'super-size' and not, as so frequently happens, biting off more than we can chew.
First published in Application Development Advisor