
bad science

My business is change. I like to think we help people change for the better by introducing tools and techniques we believe to be good practice. To do this, we rely on our own research, but we also encourage our customers to do their own research, and for the most part what they find backs up our suggestions for improvement. Imagine my surprise, then, when a customer recently sent me a link to some 'research' from a company billing itself as a 'Research Lab.' To give it even more credence, the company trumpeted its very own 'Chief Scientist,' complete with PhD.

The 'research' was pretty standard stuff: they'd done some static analysis on a bunch of systems from large companies and found out pretty much what many of us already know, namely that most computer systems aren't written very well and there is an awful lot of technical debt around. There are many reasons for this, which, again, many of us are aware of, so I won't bore you by evangelising our favoured solutions here. In any case, the main reason for publishing the 'research' was marketing the company's products and services in this area.

However, there was one phrase in the findings that had caught the customer's eye: "Scores for robustness, security, and changeability declined as the number of releases grew, with the trend most pronounced for security." In other words, the more releases you do, the more likely you are to have issues with security, robustness and changeability.

The reason this caught his eye is that we had been helping this customer with a transition to a model of Agile Software Development, of which some of the key principles are: "Early and frequent release of valuable software" and "Deliver working software frequently." We had told him that releasing as often as possible would lead to more satisfied customers and better quality software. Now he wanted to know how we could reconcile this with 'research' done by a 'Chief Scientist' from a 'Research Lab' that said the more releases a product has, the more likely it is to have security, robustness and changeability issues.

He had a very good point: moving to a continuous integration and delivery model is one of the first steps we recommend to all our customers, but we also like to think we are scientists and that our methods are based on solid foundations. Should we continue with a continuous integration and delivery strategy when the 'research' tells us otherwise?

I'd like to report that we spent a lot of time and effort on this ourselves, but it really only took us a few minutes to figure out what the real problem was. Anyone who's undertaken any field of scientific study will know the first, and probably hardest, thing to learn is critical analysis. In other words, how to understand what the data is telling us. The first lesson in critical analysis is understanding that just because there is a relationship between two things doesn't necessarily mean the relationship is causal, and even when it is, it is vital to identify which item is the cause and which item is the effect.

For example, every so often in the medical sector some bright young scientist will release research purporting to demonstrate that skinny people are more likely to die young than fat people are. Often this research is pounced upon by the more sensationalist newspapers and published under banner headlines exhorting us to abandon our diets.

The research in question is usually based on the measurement of weight and age at death for non-accidental mortalities. The researcher invariably finds the average weight of the fatalities to be lower than the average weight of the population and concludes, therefore, that the less you weigh, the more likely you are to die.

What the researcher fails to account for:

  • People die young mostly because of illness.
  • Many of the illnesses that cause you to die young are wasting illnesses that cause massive and rapid weight loss before death.
  • Many people spend a long time in hospital before they die and are subject to dietary controls.



The truth is - it’s not being slim that makes you die, it’s dying that makes you slim.

Most medical professionals are aware of this phenomenon and simply ignore it, as do all responsible medical journals. At the same time, many practitioners ready themselves for the barrage of questions shortly to be directed their way from concerned patients who have read the 'research.' After all, if it's published 'research' it must be true, mustn't it?

Let's go back to the 'research' that told us frequent releases lead to security, robustness and changeability issues and see what the researchers might have missed...

  • Anyone who's worked in a regulated industry would agree that security and robustness issues are mission-critical and need to be fixed and released ASAP. You don’t wait for the next scheduled release to deploy the fixes for them, you do a patch release. In other words, security and robustness issues force you to have more releases. Ergo, a system with security or robustness issues is very likely to have more releases.
  • Code that is difficult to change suffers from a similar problem: software is difficult to change when it is difficult to understand. If we understood it, it would be easy to change. When it's difficult to understand, it's also difficult to know when it's been fixed. You may think you've fixed it and do a release, but you haven't. You may have to do several releases before it's finally put to bed. Therefore, code that is difficult to change is likely to have more releases.
  • Code that has no issues doesn't have patch releases.



The truth here is - it's not frequent releases that cause code issues, it's code issues that cause frequent releases.
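The reversed causality above can be sketched with a toy simulation. The model and all its numbers are entirely hypothetical: we assume each system carries some number of latent code issues, each of which eventually forces an unscheduled patch release on top of a fixed schedule of planned releases. A naive analyst who then correlates release count with issue count finds a strong positive relationship and can easily read the arrow of causation the wrong way round.

```python
import random

random.seed(42)

# Hypothetical model: latent issues are the cause, extra releases the effect.
systems = []
for _ in range(1000):
    issues = random.randint(0, 20)           # latent defects in the system
    planned = 4                              # e.g. quarterly scheduled releases
    patches = issues + random.randint(0, 3)  # each issue forces a patch release
    systems.append((issues, planned + patches))

def pearson(pairs):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    vx = sum((x - mx) ** 2 for x, _ in pairs)
    vy = sum((y - my) ** 2 for _, y in pairs)
    return cov / (vx * vy) ** 0.5

r = pearson(systems)
print(f"correlation between issue count and release count: {r:.2f}")
# The correlation is strongly positive, yet by construction the issues
# came first and caused the releases - not the other way around.
```

The point of the sketch is not the exact number but the shape of the trap: a dataset generated by "issues cause releases" is statistically indistinguishable, to a correlation alone, from one generated by "releases cause issues."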

Even funnier to some people, but actually quite tragic for our industry, is a piece of research lamenting the poor analysis and design skills of software developers that is blighted by poor analysis skills itself. Most experienced professionals and journals will recognise this 'research' for what it is but, again, practitioners need to ready themselves for the barrage of questions from concerned customers. After all, if it's published 'research' it must be true, mustn't it?
