
Agile Testing

I was at a customer site not so long ago giving a course on Agile Software Development, and part of the course was an introduction to test-driven development (TDD). TDD is a process in which the requirements are specified as a set of tests, and the developers use the number of tests passing, or failing, to measure progress in the system. In the middle of one of my talks, the head of testing rose from his seat and asked: "So you're saying that we should let the developers know what the tests are before they even start coding?" After I replied in the affirmative, he responded with, "That would be cheating! If we did that, the developers would only write code to pass the tests!"
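As a minimal sketch of the TDD cycle described above (the function name and the VAT example are invented for illustration, not from the course material), the test is written first and pins down the requirement before any production code exists:

```python
# A minimal TDD sketch: the test comes first and states the requirement;
# the implementation below exists only to make that test pass.

def test_total_includes_vat():
    # Requirement, expressed as a test: a 100.00 net price with 20% VAT
    # must produce a gross total of 120.00.
    assert price_with_vat(100.00, 0.20) == 120.00

# The simplest implementation that satisfies the test above.
def price_with_vat(net, vat_rate):
    return round(net * (1 + vat_rate), 2)

test_total_includes_vat()
print("1 test passed")
```

Run before `price_with_vat` exists and the test fails; written this way round, "only writing code to pass the tests" is precisely the point.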

That particular manager's opinion is one I've found to be reasonably common among testers, and it's one I've always found difficult to understand. There seems to be a general rule in some organisations that once the requirements have been captured, there should be no communication between developers and testers until the day the code is finished and ready for testing. On that day the code is signed off by development and handed over to testing, only to be rejected and returned because of the number of defects in it; in many cases, defects the developers weren't even aware were defects. It's been said that, in many projects, this is where design and coding really start. This is where the developers finally discover what the application is meant to do and, just as importantly, what it is meant not to do.

This is often the point in the project lifecycle where the blame and recrimination wars begin too. The developers insist their interpretation of the requirements is the correct one, the testers completely disagree, and so the system fails the tests, with each side refusing to admit being in the wrong. Is it any wonder that in many companies there is no love lost between the two factions?

How does this occur? Both sides have almost completely different views of what the system should do, yet both were working from the same set of requirements: a set captured and documented in a manner specifically intended to make them understandable by everybody and to prevent any equivocation or ambiguity.

The problem is partly the ambiguity of language. Although we have expressions like "plain English", the English language is far from plain, and I'm fairly certain this is true of every other language on the planet too. Languages and the rules governing their usage are complex. The meanings of words often change depending on the context in which they are used. Sometimes the context is explicitly communicated along with the words; other times it is tacit and the speaker expects the listener to infer it. The speaker may also use body language or emotional cues to give the listener additional contextual information.

In his book User Stories Applied, Mike Cohn uses 'buffalo' as an example of a word that can have many meanings. It is, as he says, a bison-like animal, but dictionary.com also defines it as a verb with two further meanings: to bully or intimidate, and to deceive, confuse or bewilder. In addition, Buffalo is a city in the state of New York, so a valid sentence using these meanings could be "Buffalo buffalo buffalo and buffalo Buffalo buffalo". My grammar checker doesn't like that at all and complains that the word buffalo is repeated too many times. However, it doesn't know English as well as we do and so isn't able to figure out that this is, indeed, a perfectly legitimate statement meaning: "Bison from a city in New York state intimidate and confuse other bison from the same city." We are able to understand it because we are aware of the context surrounding it.

An interesting and somewhat humorous example, if a contrived one, but it demonstrates how even a perfectly spelt, punctuated and grammatically correct sentence can be impenetrable without context. Certainly impenetrable to my grammar checker, and probably to most humans too.

We also see another phenomenon in effect here. When faced with information that is incomplete, we have a tendency to fill the gaps with assumptions based on our own past experiences. We then process the information and use the conclusions for our next set of actions, which may include gathering further incomplete information, filling in the gaps and performing more processing. On and on we continue, and with each step we climb further up the 'ladder of inference'. Because the experiences of each human being are unique, no two people will climb the ladder in the same way, and so each will reach different conclusions. The more incomplete the original information is, and the more gaps that are filled with personal assumptions, the more convinced we become that our, and only our, conclusions are the correct ones. Fortunately, we share a lot of culture and experiences with our colleagues, so we make similar assumptions, and when we climb the ladder our conclusions shouldn't be too different from theirs. Developers and testers, though, are not always immediate colleagues. Often they belong to separate departments with separate offices, and sometimes even separate buildings. The physical distance between them, and the competition between the two factions, makes their views of the world even more disparate.

The third part of the problem is that the requirements document is an artefact that forms the basis of a contract. If we're working to a fixed-scope, fixed-price contract, it is this very document that defines the extent of the scope. According to Barry Boehm's famous exponential cost-of-change curve, the cost of changing the specification increases by a factor of ten each time the project moves through a stage in the development cycle. At the very beginning of any project longer than, say, a month, it is extremely unlikely, if not impossible, for the customer to know what will be required at the end of the project. If the customer or the analyst gets any of the requirements wrong, or omits them in the requirements-gathering phase, there will be a heavy cost to pay for adding or changing them later. Given this set of circumstances, the optimal strategy for the person preparing the document is to couch the requirements in terms as vague as possible. The ambiguity gives us the chance to argue the precise detail later, when we have more knowledge about the system.
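To see what a factor of ten per stage compounds to, here is a small illustrative calculation (the stage names and the base cost of one unit are my own simplification, not figures from Boehm's data):

```python
# Illustrative arithmetic for an exponential cost-of-change curve:
# if a fix costs 1 unit at requirements time and roughly 10x more at
# each subsequent stage, the same fix after release costs four orders
# of magnitude more.

stages = ["requirements", "design", "coding", "testing", "production"]
cost = 1
for stage in stages:
    print(f"{stage}: {cost} unit(s)")
    cost *= 10
```

A one-unit fix at the requirements stage becomes a ten-thousand-unit fix in production, which is why the late-breaking disputes described above are so expensive.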

Three problems, then, that lead to failures near the end of the project: just the place where the cost-of-change curve says failures are the most expensive to fix, and just as we'd planned to hand the project over to the customer. In fact, failure often occurs at the very last place we want, or can afford, to fail, but the causes of failure are inherent in the methods we use to plan and implement our projects. In effect, we actually plan to fail when we are at our most vulnerable!

Earlier in this article, I proposed that the testing phase is often when the developers really start to find out what the project is meant to do. If that is the case, would it not make more sense to start the testing phase at the beginning of the project? This may sound strange and counter-intuitive to a lot of people (how can we test something that doesn't yet exist?), but it should make perfect sense to anyone with management training. They will know that quality cannot be inspected into a product after production; it can only be built in. The most important time for any defect is the twenty-four hours after it is created. If the defect is caught within those twenty-four hours, the cost of fixing it is negligible compared with the cost of fixing it later, after more code has been written on top of it. This can only happen if both the tests and the testers are available to the developers from the very start of the project.

Testing from the beginning of the project, and continually throughout the project lifecycle, is the basis of agile testing. If we can work with the customer to help specify the requirements in terms of tests, it makes them completely unambiguous: the tests either pass or they don't. If our coders only write code to pass tests, we can be sure of one hundred percent test coverage. Most of all, if we keep our testers, developers and customers (or customer representatives) in constant face-to-face communication with each other, we can eradicate most of the errors caused by climbing the ladder of inference. Breaking our projects into smaller chunks of work and iterating will give us frequent feedback on the current state of the project.
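As a sketch of what "requirements as tests" can look like in practice (the shipping rules and function name here are invented for illustration), each customer rule becomes an executable check with a binary outcome, leaving no room for interpretation:

```python
# Hypothetical acceptance tests: each customer rule is an executable
# check that either passes or fails - there is nothing to argue over.

def shipping_cost(order_total):
    # Simplest implementation satisfying the rules below.
    return 0.0 if order_total >= 50.00 else 4.99

# Rule 1: orders of 50.00 or more ship free.
assert shipping_cost(50.00) == 0.0
assert shipping_cost(120.00) == 0.0

# Rule 2: orders under 50.00 pay the flat 4.99 shipping charge.
assert shipping_cost(49.99) == 4.99

print("all acceptance checks passed")
```

Whether such checks are written in plain code like this or in an acceptance-testing tool, the point is the same: the customer, testers and developers all read one unambiguous statement of the requirement.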

There are many teams now using agile testing techniques to improve the quality of their products, and they are having great success. There is some investment in training required, and changes to the workspace are necessary to allow customers, testers and developers to work side by side, but these are a small price to pay for the advantages gained.

The most difficult thing for most teams is shifting away from the perception of the test team as competing with the developers, focused on detecting faults and preventing poor-quality products from being released. In the new agile testing paradigm, the test team collaborates with the developers to build quality in from the start and release robust products that deliver the best possible business value for the customer.

References
- Cohn, M., User Stories Applied, Addison-Wesley Professional, 2004
- Boehm, B., Software Engineering Economics, Prentice Hall, 1981


First published in the BCS SIGIST Journal, The Tester
