
i do declare

Years ago, when I was a development manager, one of my biggest problems was that nobody ever seemed to be able to tell me exactly where we were in the project! If I asked the developers, they would only ever tell me they were either 20% done or 80% done, even right up to the week before the work was due. Even when they were finished, there was no way of knowing the quality of the product until after it had gone through Quality Assurance.

The other big problem was the Managing Director (we didn't have CEOs, CIOs and CTOs in those days) hijacking developers to work on his own personal projects. Promises of a bonus combined with a warning not to tell anyone else occasionally left me bemused as to why things were taking so long.

In those days I used traditional PM techniques and a well-known brand of project management software, but even then I couldn't tell whether work was on track on a day-to-day basis. I would usually only find out work would be delayed on or around the due date, when a developer would finally admit he wouldn't be finished in time. Conversations with other development managers confirm I'm not the only one who's suffered from these problems.

So I was quite heartened recently when I saw a preview of a conference paper given by two young developers from ThreeQ Solutions in Dublin, Ray Gallagher and Sean O'Donnell. The paper was called "Begrudge Every Keystroke" and detailed how they'd automated almost every aspect of their development process.

Their presentation really brought home to me the massive changes in development departments in the last few years. Probably the greatest change is in the speed, quantity and, most of all, quality of feedback. Feedback is everywhere nowadays and, because management has slowly come to realise it's actually a good thing, we're constantly enjoined to produce more.

The popularity of iterative development has meant that feedback cycles are becoming shorter and shorter. We no longer have to wait twelve months to see if our code will integrate successfully with the rest of the department′s or if the changes we′ve made have broken anybody else′s code. The most important period in a defect′s lifecycle is the twenty-four hours after it is first injected into the system and now we have tools that will tell us about defects within that period.

The first feedback the ThreeQ developers get is from the unit test harness. As the programmers code, they write unit tests and continually run their code against them. In this way they can see almost immediately if they've broken any of their own code as they progress.
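To make the idea concrete, here is a minimal sketch of a test-as-you-go check. The `Account` class and its tests are invented for illustration, and the team actually used JUnit; Python's `unittest` is used here only to keep all the examples in one language:

```python
import unittest

# Hypothetical class under test, standing in for whatever the team is building.
class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

# Tests written alongside the code; running them after every change
# shows almost immediately whether anything has been broken.
class AccountTest(unittest.TestCase):
    def test_deposit_increases_balance(self):
        account = Account()
        account.deposit(50)
        self.assertEqual(account.balance, 50)

    def test_deposit_rejects_negative_amounts(self):
        account = Account()
        with self.assertRaises(ValueError):
            account.deposit(-10)
```

The whole suite runs in well under a second, which is what makes running it on every change practical.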

When they've finished their particular piece of functionality, they integrate it into the codebase. The codebase is continuously monitored by an open-source tool that builds the system whenever the source changes. When the build is complete, it runs the unit test harness containing all the tests written by the team so far and then runs the code through the acceptance test harness.

All of this gives the team more feedback that their code is good, they haven't broken anybody else's and the product still conforms to the customer's requirements. Since the tests are run as soon as the code changes, the period between defect injection and discovery is limited to the length of time it takes to build the system, less than an hour in most cases.
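In outline, the watcher amounts to a loop like the sketch below. This is not CruiseControl (which did the real monitoring); it's a hand-rolled illustration, and the fingerprinting scheme and command list are assumptions made for the example:

```python
import subprocess
import time
from pathlib import Path

def source_fingerprint(root):
    """Combine the modification times of every source file into one value,
    so a change anywhere in the tree yields a different fingerprint."""
    return tuple(sorted((str(p), p.stat().st_mtime)
                        for p in Path(root).rglob("*.py")))

def run_pipeline(commands):
    """Run build, unit tests and acceptance tests in order.
    Stop at the first failure and report which step broke."""
    for name, cmd in commands:
        if subprocess.run(cmd).returncode != 0:
            return f"RED: {name} failed"
    return "GREEN"

def watch(root, commands, poll_seconds=30):
    """Poll the source tree; rebuild and retest whenever it changes."""
    last = source_fingerprint(root)
    while True:
        current = source_fingerprint(root)
        if current != last:
            last = current
            print(run_pipeline(commands))
        time.sleep(poll_seconds)
```

The real tool is event- and schedule-driven rather than a bare polling loop, but the shape — detect change, build, test, report one overall result — is the same.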

The results of the build and the tests are displayed on a monitor placed in a prominent position in the work area. The whole screen shows red or green, depending on the result of the last set of tests, or yellow when it's in the process of building. If the screen goes red, everybody stops work until the problem is resolved.
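The status screen itself reduces to a three-state mapping; a trivial sketch of that logic:

```python
def screen_colour(building, last_run_passed):
    """Yellow while a build is in progress; otherwise green or red
    depending on the last completed run of the test harnesses."""
    if building:
        return "yellow"
    return "green" if last_run_passed else "red"
```

Anything that can poll the build server's state can drive the display from a function like this.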

After he's finished integrating his code, the developer takes the job card (they call them story cards) and pins it in the 'work completed' section of a big corkboard in the development area, before taking the next available one from the 'work to do' section. Anyone walking through the office can immediately see what work is completed and what work there is still to do, just by looking at the corkboard.

Corkboards, whiteboards and monitors used in this way are what Alistair Cockburn calls information radiators, and they need to be placed where anyone walking past can see them. Used like this, there is less need for people to ask questions, as the information is right in front of them as they go by. The bigger they are the better, as it takes less effort to view them. The only other rules for information radiators are: they must not take much effort to update and they must be updated continuously.

At any time, a visitor to the offices can see the state of the product. The same information placed in files on a server somewhere, even when easily accessible by a browser, takes much more effort to access and so is likely to be accessed less often.

This is also known as the visible workplace, a principle borrowed from Lean Manufacturing and used in Lean Software Development. The use of story cards is also similar to the Kanban concept that underlies just-in-time production in Lean Manufacturing.

An interesting facet of Ray and Sean's presentation was that they'd also replaced story cards with their own homegrown electronic equivalent. Like the rest of the department's information, this too was prominently displayed on a monitor that tracked not only which cards were left to do but also a burndown chart of their progress through both the current iteration and the project as a whole.
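They didn't describe the chart's internals, but the figure a burndown plots can be derived from the cards alone; a sketch with invented numbers:

```python
def burndown(total_points, completed_by_day):
    """Points remaining at the end of each day of the iteration,
    starting from the iteration's total."""
    remaining = [total_points]
    for done in completed_by_day:
        remaining.append(remaining[-1] - done)
    return remaining

# Invented iteration: 20 points of story cards, daily completion figures.
print(burndown(20, [3, 5, 2, 4, 6]))  # [20, 17, 12, 10, 6, 0]
```

Plot those values against the days and the slope shows at a glance whether the iteration will land on time.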

Of course, although they have developed some of this themselves, most of the utilities are available as open-source offerings. CruiseControl manages the continuous integration monitoring and they use Ant to manage the builds. JUnit is used for unit testing and a combination of FIT and Exactor is used for acceptance testing. A quick Google will find them if they're not on SourceForge.

The problem, as always, lies in plugging all these tools together quickly and easily. They could, of course, have simply written a program to weld everything together, using the language they develop their applications in for their customers. The overhead for that would have been pretty high, however, so for expediency, they used what they called declarative scripting in Python to glue the utilities together.

Python is an interpreted object-oriented programming language that is remarkably powerful and has a very clear syntax. Like most fully developed languages it has modules, classes and exceptions. You can write your own modules for it in C or C++, interface it with other libraries and use it interactively too. It is free and you don't need a licence to distribute it with your application.

They call it declarative scripting because they only ever use language statements in their scripts. Other language constructs, particularly conditionals, are not used. If a statement succeeds, the script continues. If it fails for any reason, the script halts and returns with an error message. Using the language this way makes the logic of their scripts easy to understand and maintain, vital for these situations.
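A sketch of a glue script in that style: one helper that runs a stage and halts on failure, then a straight line of statements and nothing else. The stage commands here are stand-ins (the real scripts would drive Ant, the test harnesses and so on):

```python
import subprocess
import sys

def step(name, cmd):
    """Run one stage; any failure halts the whole script with a message.
    This is the only control flow the 'declarative' style needs."""
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"FAILED at step: {name}")

# The script body is nothing but a straight line of statements.
# (Stand-in commands for illustration; substitute real build/test tools.)
step("build",            [sys.executable, "-c", "print('building')"])
step("unit tests",       [sys.executable, "-c", "print('unit tests pass')"])
step("acceptance tests", [sys.executable, "-c", "print('acceptance pass')"])
print("all steps passed")
```

Because there are no conditionals, reading the script top to bottom tells you exactly what it does, and the first failing step tells you exactly where it stopped.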

It only takes seconds for a developer to hit the build button in the IDE, so why do they go to the bother of automating a process that takes hardly any time anyway? Well, in their organisation they recognise the difference between value-added time and non value-added time.

Value-added time is time spent on activities that actually produce value for the customer; value is what they are paying you to produce. In software development only requirements gathering, coding and producing user manuals count as value-added. Non value-added items include: quality assurance, gathering metrics, rework due to defects and even compliance-related activities. Interestingly, actually compiling and building the software adds no value to the product either.

Having the process automated to such an extent does two things. It frees the developers from mundane chores such as monitoring builds and testing and allows them to spend more time adding value to the product. Hence the title of their presentation. It also reduces the amount of time it takes each feature to go through the system, a figure the lean people call cycle time.

The quicker the cycle time, the more efficient the process. Work in progress in the system, part-done work if you will, is inventory or stock. Although there may still be some old-fashioned accountants out there who believe stock is an asset, most now consider it to be a liability, especially if the stock is as volatile as software can be.

Cycle efficiency is another Lean Software Development measurement they use: value-added time divided by total lead time. A process can only be considered lean if its cycle efficiency is more than 25%. At least one quarter of all the time spent must be spent adding value to the product. It sounds a remarkably small amount, but try working out how much of your process is spent adding value to your products and how much is spent doing other things, or even just waiting.
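A worked example of the measure, with invented figures for one feature:

```python
# Invented timesheet for one feature, in hours.
activities = {
    "requirements gathering": 2,   # value-added
    "coding":                 8,   # value-added
    "waiting in queue":       18,  # non value-added
    "build and integration":  2,   # non value-added
    "QA and rework":          10,  # non value-added
}

# The activities the article counts as value-added.
VALUE_ADDED = {"requirements gathering", "coding", "producing user manuals"}

def cycle_efficiency(timesheet):
    """Lean measure: value-added time divided by total lead time."""
    value = sum(h for name, h in timesheet.items() if name in VALUE_ADDED)
    total = sum(timesheet.values())
    return value / total

print(f"cycle efficiency: {cycle_efficiency(activities):.0%}")  # 10/40 = 25%
```

Ten value-added hours out of forty in total puts this (fictional) feature exactly on the 25% lean threshold; in many real processes the queueing time alone pushes the figure far lower.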

A half-dozen or so open-source offerings coupled with some simple scripting, all of which are free, can furnish this much improvement in the workplace. Visible indicators of progress and automatic defect detectors beat any other PM tools or practices I've come across so far.

References:

  • Poppendieck, M. and Poppendieck, T., Lean Software Development: An Agile Toolkit, Addison-Wesley, 2003
  • Cockburn, A., Agile Software Development, Addison-Wesley, 2002
  • Python: http://www.python.org/


First published in Application Development Advisor
