
non-functional code

...is what I immediately think of when I'm asked to work with 'non-functional requirements.' If that's what they want, I am only too willing and definitely more than capable of satisfying the request, and have proved this many times in the past.

Seriously, though, the way we are presented with these so-called non-functional requirements is amazing, as is the range of them. One of the most common is, "It must be fast!" Well, what does that mean? Programmers are sometimes accused of being pedantic and selfish about what functionality they will let you have from a system, but the other side of the coin is the miserly customer, the one who is unable, or refuses, to explain the true extent of the requirement. Fast is a relative term. We need to ask the customer, "How will we know when it is fast enough?" In other words, we need to quantify it so we can write a test for it. Tests are how we know that we've completed the task and can go on to the next one, or go home. Only when the tests are passing do we know that we are finished and the software is running the way the customer wanted. Anything else is merely guesswork. I can picture the scenario now: the developer repeatedly demonstrating the software to the customer, only to be told to go away and make it faster, again and again.

Feasibility is a major consideration too. I once worked on a project where the customer requested (demanded, actually) that there be no more than a 35-millisecond delay between one part of the system and another while under load. Unfortunately, when we timed the original system, the one we were merely building an add-on for, we found the elapsed time was already 40 milliseconds. Getting it down to 35 could only have been achieved by rewriting the whole thing from scratch, if at all. Luckily, after we discreetly pointed out the incongruity of his wishes, the customer was able to see the funny side of his over-enthusiastic demands and relaxed the lag time to 50 milliseconds, which was easily achievable.

It was easily testable too. We already knew the performance of the existing system, so it was simple to calculate the longest delay our add-on should introduce: 10 milliseconds. Anything more and we had failed. It's not difficult to introduce tests to demonstrate timings; there are even profiling applications out there that can do it for you. Malcolm, the manager of this particular project, had his own pet profiling application that he was familiar with and wanted to use on this project. He felt that since he'd been on a training course for the product, he might as well put his knowledge to good use. His suggestion was that once a week or so he would set the software up on a machine, insert timing marks into various bits of the code, run the profiler, and then distribute the timing diagrams it produced around the team.

All well and good, but I felt that, at a week, the feedback loop would be a little too long, and asked if we could automate this as part of our test harness. Then it would run every time a developer ran the unit test harness prior to integration, and we would know that our timing test was failing before we put the code in the codebase rather than some time after. The biggest obstacle was the cost of licences: this particular product would have required each development workstation to have its own licence, at a price for which we could probably have hired two more developers for a year.

An alternative, and the one we decided on, was to insert our own timing marks inside unit tests. It's easy enough to do using standard C++ system timing calls, and I'm fairly sure equivalents are available in most other languages too. Get the current time to the nearest millisecond, call the functionality under test, and then calculate the elapsed time. We knew what the boundary time was, so anything under it was a pass and anything over was a failure. Because we were practising continuous integration with hourly automated builds, we would know almost immediately if we had transgressed this particular constraint. There was no need for everyone to have a licence for the profiler, and no need for anyone to use it unless the test failed.
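To illustrate, here is a minimal sketch of the idea in modern C++. The function under test, process_message, is a hypothetical stand-in for the add-on call, and the 10-millisecond budget is the figure from the project above; the original code would have used whatever timing calls were standard at the time, but std::chrono expresses the same technique.

    #include <cassert>
    #include <chrono>
    #include <thread>

    // Hypothetical stand-in for the functionality under test; a real
    // test would call into the add-on instead.
    void process_message()
    {
        std::this_thread::sleep_for(std::chrono::milliseconds(2));
    }

    // Fails whenever the call exceeds its agreed timing budget.
    void test_process_message_within_budget()
    {
        using clock = std::chrono::steady_clock;
        const auto budget = std::chrono::milliseconds(10); // the agreed boundary

        const auto start = clock::now();
        process_message();
        const auto elapsed = clock::now() - start;

        // Under the budget is a pass; over it, we have failed.
        assert(elapsed <= budget);
    }

    int main()
    {
        test_process_message_within_budget();
    }

Run on every build, a test like this turns "it must be fast" into a pass-or-fail answer at each integration, which is exactly what the hourly builds gave us.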

"Well now," I hear you say, "that's all very well and good for testing whether or not you're complying with the customer's requests, but doesn't good practice dictate that you design for speed in the first place?"

This, I think, is a hangover from the early days of programming, when processing time was expensive and programming time was cheap. Things are different now, but there still seems to be an awful lot of people out there who haven't noticed this paradigm shift. We all know processor speeds have been doubling every few years for decades; surely I'm not the only one whose wages haven't been doing the same? Still, at every shop I go to, there is at least one hard-liner who insists on poring over every line of code he produces, seeking to wring the very last microsecond of performance out of it. Probably dreaming of winning the Obfuscated C contest (is that still going?). I'm sorry, but in today's era of optimising compilers, this counts as gold-plating. I'm not saying there are no occasions when speed is important; I'm just saying they are very few, and becoming fewer as processors get faster and more powerful. I'm also well aware that compilers are usually much, much better than humans at code optimisation.

So my answer is not to code for speed but to code for quality, which to me means simplicity. Concentrate on getting the code working and refactored first: working, meaning it passes all the tests you and the customer can think of throwing at it; refactored, meaning it has the simplest design possible, with no redundancy or duplication, and still passes the tests. When, and only when, it fails the timing test, run it through a profiler and let the profiler tell you where the bottlenecks and hotspots are. The profiler will do a much better job of identifying the areas that need improvement than a human ever could. Starting from a simple design will make it so much easier to redesign, and having tests will make sure it still functions correctly. Make it work, make it right, make it fast!

It does make me wonder, though, when these shifts in reality are not perceived, or are ignored, sometimes even denied, by members of our own community. We're supposed to be at the leading, if not bleeding, edge of technology, and yet so often we stick to practices that have long been made redundant by changes in technology.

I remember giving a talk on the quality-over-speed subject to a development team at a very well-known insurance company last year. I was just at the bit where I recommend coding for quality rather than speed when the development manager jumped out of her seat, interrupted my talk, and called for her developers to ignore every word I'd said and, in future, "design all of your code to be fast."

So speed, the 'make it fast' syndrome, is the most common non-functional requirement that comes my way. The next most popular (with the customer, not me) is scalability, usually manifested as, "Make it scalable."

This one really throws the developers sometimes, because they suffer from the 'three numbers' problem: the only numbers they recognise are zero, one, and infinity. So when asked to make it scalable, they know they won't be dealing with zero transactions or users, and it's not one either, so it must be an infinite number of users or transactions they need to handle. They will spend weeks, if not months, trying to figure out how to make the system handle, maybe not an infinite number of users, but at least the number of human beings alive today. More gold-plating!

Plainly this is very rarely the case, and the solution is similar to that of the speed issue. Give them an appropriate number so they can write a test that exercises the system with that number of users, or shows that it shares its load properly over that many machines in a cluster. Given the tests, we will always know when we are complying and when we're not. With scalability, my approach is to make it work for one instance first.
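Again, a minimal sketch of the idea, staying in C++. The handle_request function and the figure of 500 users are hypothetical; the point is that the test pins the requirement to the number the customer actually agreed, not to infinity.

    #include <atomic>
    #include <cassert>
    #include <thread>
    #include <vector>

    // Hypothetical request handler standing in for the system under test.
    bool handle_request(int user_id)
    {
        return user_id >= 0; // pretend every well-formed request succeeds
    }

    // Exercise the system with the agreed number of simulated users.
    void test_handles_agreed_user_count()
    {
        const int kUsers = 500; // the number the customer signed up to
        std::atomic<int> successes{0};

        std::vector<std::thread> users;
        users.reserve(kUsers);
        for (int i = 0; i < kUsers; ++i) {
            users.emplace_back([i, &successes] {
                if (handle_request(i)) {
                    ++successes;
                }
            });
        }
        for (auto& user : users) {
            user.join();
        }

        // Every simulated user must have been served.
        assert(successes == kUsers);
    }

    int main()
    {
        test_handles_agreed_user_count();
    }

Swap the constant for whatever figure the customer gives you, and the test answers the scalability question the same way the timing test answered the speed one.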

There are many more of these so-called non-functional requirements, and yet all of them are about functionality. They may not individually relate to single features, but all of them are about how the application functions, and all of them can be tested.

One I really hate, and that can't be tested automatically, is "Must be easy to use." You can guarantee that the customer asking for this one is the guy who insisted on having a feature list longer than War and Peace, with multiple configurations for each. Ease of use, or usability as it's popularly known, shouldn't really fall within the programmer's remit; it's something they are notoriously bad at anyway. It is best tackled by interaction designers before the programmers start coding. In reality, it gets tagged on at the very end, when the customer suddenly realises the user interface is too complicated as a result of all those finicky little widgets he absolutely insisted on having.

As my colleague, the French developer and author Laurent Bossavit, says, "The distinction between functional and non-functional requirements is useless. All types of requirements stem from a misfit, a difference between a perceived current reality and a desired future reality. Articulate the difference; if its effects can be tested, then you can make it a requirement. In many, many cases, the test also allows for total or partial automation."

The most important thing is to get this information at the appropriate point in the project lifecycle. It is much easier if the customer tells you, "By the way, it has to support 15,000 users a day across four continents," on day one than if he tells you the week before release is due. This is where experience comes in: knowing what questions to ask the customer. If it's a client app, is it standalone, or does it share data with other users? If it's a server app, how many concurrent users? And so on.

Last but not least, I swear that the next time a customer requests that his application "should be pretty," I will fetch my ugly stick and batter him with it, or maybe send it to Trinny and Susannah for a makeover.

First published in Application Development Advisor
