
Posts

agile testing

I was at a customer site not so long ago giving a course on Agile Software Development, and part of the course was an introduction to test-driven development (TDD). TDD is a process whereby the requirements are specified as a set of tests and the developers use the number of tests passing, or failing, to measure the amount of progress in the system. In the middle of one of my talks, the head of testing rose from his seat and asked, "So you're saying that we should let the developers know what the tests are before they even start coding?" After I replied in the affirmative he responded with, "That would be cheating! If we did that, the developers would then only write code to pass the tests!" That particular manager's opinion is one I've found to be reasonably common among testers, and it's one I've always found difficult to understand. There seems to be a general rule in some organisations that once the requirements have been captured, there should be no communication to future developers...
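To make the test-first idea concrete, here is a minimal sketch in Python; it is my own illustration, not an example from the course, and the calculate_vat function and the 20% rate are assumptions chosen to echo the VAT example that turns up later in these posts. The tests are written and shared with the developers first, and the production code is then written only to make them pass.

import unittest

# The tests come first, taken straight from the (assumed) requirement:
# "VAT of 20% is added to the net amount of an invoice."
class TestInvoiceVat(unittest.TestCase):
    def test_vat_is_twenty_percent_of_net_amount(self):
        self.assertAlmostEqual(calculate_vat(100.00), 20.00)

    def test_vat_of_zero_net_amount_is_zero(self):
        self.assertAlmostEqual(calculate_vat(0.00), 0.00)

# Only then is the production code written, and only enough to pass the tests.
def calculate_vat(net_amount, rate=0.20):
    return net_amount * rate

if __name__ == "__main__":
    unittest.main()

Far from being cheating, handing the developers these tests up front simply tells them when they are done: the count of passing tests is the measure of progress.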

We've all seen that scene in the movie where the family all gather round to hear the reading of the will. As the solicitor reads each name aloud, the expression on the corresponding face is a signal as to how welcome the bequest is to the recipient: a smile, a gasp of pleasant surprise or, maybe, a frown of disappointment. A legacy is something that is left or given by someone who is now at a distance, making communication with the donor difficult. It is also an 'aide-memoire', something to remember them by. Recently though, the term 'legacy code' has come to mean code which is difficult to work on, partly because it can be difficult to communicate with the originator of the code but mostly because it is difficult to communicate with the code itself. This strikes me as strange because that is the very essence of the programmer's task. We are not 'problem-solvers', as some would have it. Calculating the V.A.T. on an invoice or working out how to draw graphics on a screen are not 'problems

The Customer Paradox

We all have customers. Some of them are real live customers, people we deliver a tangible product to in return for financial reward, and others are merely the next link in our production chain. Testers, for example, are customers to developers. The one thing all customers have in common is that they are happiest when they are given exactly what they want. When I think back to the times I've been a customer, be it in a shop or maybe a restaurant, I remember how annoyed I was when given something I didn't ask for and didn't want. Even more annoying, though, were the times when what I was given wasn't what I wanted even though it was exactly what I asked for. This brings me to the customer paradox, which can be stated as, "the harder you try to define your customer's requirements, the less likely you are to deliver what he wants." This is especially true of software development and, in my opinion, it's largely to do with the way we approach project management. Yes, you've guessed it, I'm having a d

i do declare

Years ago, when I was a development manager, one of my biggest problems was that nobody ever seemed to be able to tell me exactly where we were in the project! If I asked the developers, they would only ever tell me they were either 20% done or 80% done, even right up to the week before the work was due. Even when they were finished, there was no way of knowing the quality of the product until after it had gone through Quality Assurance. The other big problem was the Managing Director (we didn't have CEOs, CIOs and CTOs in those days) hijacking developers to work on his own personal projects. Promises of a bonus combined with a warning not to tell anyone else occasionally left me bemused as to why things were taking so long. In those days I used traditional PM techniques and a well-known brand of project management software, but even then I couldn't tell whether work was on track on a day-to-day basis. I would usually only find out work would be delayed on or around the due date, when a de

automatic coders

"We have separate analysis, design, coding and testing departments, and most projects follow the corresponding phases in a more or less standard waterfall pattern", said Jim, the operations manager. "It mostly works well, but we do tend to find a lot of defects at the testing stage, and because of tight schedules we often have to ship before the defects are fixed, supplying patches to the customers later." It was no surprise to hear that the company was rapidly losing its reputation and its customer base. The sales and marketing department were constantly complaining of abuse and ridicule from the customers during contract negotiations. In response, as I often do, I drew a little picture of a project timeline, from project inception to product delivery, and asked him, "Where on this line would you least like to discover your defects?" Occasionally, a respondent may claim that he never likes to find defects anywhere, but the overwhelming majority say, "Just before product delivery", as did Jim

a testing time for all

The problem is not the defects themselves, although, admittedly, they are a problem. The problem is recognising where the defects come from and what causes them, and I've found that opinions on this subject can differ quite widely. For example, I once worked for a boss whose argument was that defects can only ever come from a programmer. For him there were no two ways about it: only programmers wrote code, so only programmers could create defects. In his mind, defects were a problem for programmers and programmers only. He believed the one and only cause of defects was bad workmanship on the part of programmers and even proposed a league table so we could identify (and remove) the worst offenders. The funny thing is, this same company had a very strict recruitment policy, putting candidates through a long interview process and quite extensive technical tests so that they only took on the very highest calibre of developers. I would be surprised if more than one in a hundred applicants actually