Why agile development races ahead of traditional testing
Traditional testing practices optimise large, centralised testing but struggle to support the rapid delivery of agile development.
As your developers shift to agile practices, they will invariably perform more testing themselves. So where does that leave your quality assurance (QA) professionals?
They need to adapt to changing circumstances by getting deeply involved in the daily operations of the development team.
Advanced practices such as test-driven development, increased testing automation and continuous build and integration make a significant impact on the day-to-day activities of developers and testers.
These shifts in testing practices also change how development teams select testing tools. Developers want tools that easily plug into their integrated development environments (IDEs), while QA and other software professionals prefer tools that offer a higher level of abstraction and are easy to use. So how are agile teams revamping their testing tool strategies in the development context?
Traditional testing practices were designed to optimise the operation of large, centralised testing groups using a testing centre of excellence (TCOE) model. But this shared-services approach breaks down in agile organisations because it cannot support the rapid delivery rates of agile development teams.
For example, the US Department of the Treasury’s Financial Management Service had to completely isolate a major agile project from the regular IT department’s governance processes, including the TCOE. The team adopted a behaviour-driven development (BDD) approach to testing and development, with great success.
The development team selected and used Cucumber, an open-source testing tool, because it supported the new approach better than traditional testing tools. The result was a higher degree of automation and speed in testing than would have been possible had the team been forced to comply with TCOE governance and processes.
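Cucumber lets a team express acceptance criteria as plain-language scenarios and bind each step to executable code. As a rough illustration only – the payment scenario and step names below are hypothetical and not taken from the Treasury project – a Cucumber-JVM step definition might look like this, assuming the cucumber-java library is on the test classpath:

```java
// Corresponding Gherkin scenario (e.g. in src/test/resources/payments.feature):
//   Scenario: Reject a payment that exceeds the daily limit
//     Given a daily payment limit of 500.00
//     When a payment of 750.00 is submitted
//     Then the payment is rejected

import io.cucumber.java.en.Given;
import io.cucumber.java.en.When;
import io.cucumber.java.en.Then;

import java.math.BigDecimal;

public class PaymentSteps {

    private BigDecimal dailyLimit;
    private boolean accepted;

    @Given("a daily payment limit of {bigdecimal}")
    public void aDailyPaymentLimit(BigDecimal limit) {
        this.dailyLimit = limit;
    }

    @When("a payment of {bigdecimal} is submitted")
    public void aPaymentIsSubmitted(BigDecimal amount) {
        // Stand-in for a call into the system under test.
        this.accepted = amount.compareTo(dailyLimit) <= 0;
    }

    @Then("the payment is rejected")
    public void thePaymentIsRejected() {
        if (accepted) {
            throw new AssertionError("Expected the payment to be rejected");
        }
    }
}
```

Because the scenario text doubles as the test specification, the same artefact can serve analysts, developers and testers alike.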
When testing teams are separated from development, it is typical for testers to try to find as many bugs as possible – but only after the developers have written the code. Developers, while responsible for fixing bugs, only see the results of poor attention to quality in retrospect, when the consequences of their actions are harder and more costly to fix.
The net result is that TCOEs keep testing costs low through labour outsourcing and less overall activity – but partly by shifting cost back upstream into the development cycle in the form of higher levels of scrap and rework. Tools designed to help testers document bugs and help developers reproduce and fix those bugs are useful, but do little to reduce the systemic issues that drive up scrap and rework costs.
Frontloaded test management vs rapidly changing priorities
When a company centralises its test execution activities, testing schedules cannot keep up with the rapid course corrections that characterise agile development teams. When user stories change, agile teams often reprioritise them in the backlog and do not pay much attention to developing the types of formal, detailed requirements that traditional TCOEs use as inputs to develop test cases. Traditional frontloaded test management (TM) and planning processes are not designed to support rapidly changing priorities and the short cycles required by the agile method. This problem is compounded when TM tools do not link into the agile project management requirements backlog.
Segregating testers from developers makes it hard to integrate their work into a continuous delivery pipeline. Fast-moving teams do not build code and then hand it off to a testing organisation; they build code, deploy the application, execute it and immediately observe the results. This is especially true for development teams that employ multivariate testing, which is common in web application and mobile development. These teams make a change, deploy it to a subset of servers, compare the results from each execution branch and then decide whether the change is successful. Teams that employ blue/green deployment (or red/black if you’re Netflix) replace system testing and user acceptance testing environments with multiple production environments, and they are always expanding one environment while bringing another down.
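To make the “compare and decide” step concrete, here is a minimal, hypothetical Java sketch of the kind of promotion check a team might run after routing a change to a subset of servers; the metric, threshold and class name are illustrative assumptions, not details from the report:

```java
// Hypothetical sketch of the decision step in a multivariate or canary-style
// rollout: compare an error-rate metric from the current release (control)
// and the changed release (candidate), then decide whether to promote.
public class RolloutDecision {

    private final double controlErrorRate;   // errors per request on current servers
    private final double candidateErrorRate; // errors per request on changed servers

    public RolloutDecision(double controlErrorRate, double candidateErrorRate) {
        this.controlErrorRate = controlErrorRate;
        this.candidateErrorRate = candidateErrorRate;
    }

    // Illustrative rule: promote only if the candidate is no more than
    // 10% worse than the control.
    public boolean promote() {
        return candidateErrorRate <= controlErrorRate * 1.10;
    }

    public static void main(String[] args) {
        RolloutDecision decision = new RolloutDecision(0.020, 0.018);
        System.out.println(decision.promote()
                ? "Expand the new environment"
                : "Keep traffic on the current environment");
    }
}
```

In practice the comparison would draw on live monitoring data and statistical checks, but the point is that the decision sits inside the delivery pipeline rather than with a separate testing organisation.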
Why traditional testing lags behind agile
What prevents traditional testing from keeping up with agile teams? The following points illustrate how traditional testing affects the lifecycle of an IT development project:
- Large volumes of manual test activities slow down delivery. Manual testing is the oldest and still the most common approach to testing software in testing centres of excellence (TCOEs). Test professionals develop test cases that cover as much functionality as possible, which exacerbates the problem. There is no way around the fact that manual testing is time-consuming and resource-intensive. Even throwing a phalanx of manual testers at the problem does not work; manual testing simply cannot keep up with daily builds, continuous integration, and the functional and non-functional testing cadence of agile delivery teams.
- Teams put off testing until the end of projects, then squeeze it in. Another problem frequently found in traditional testing approaches is that teams only start testing once they have developed and integrated the system, partly because of the expense and time required for manual testing. Unfortunately, projects often fall behind schedule, so teams compress the activities left at the end. As a result, testing time gets cut to make up for delays elsewhere, compromising quality.
- Late-breaking defects can derail projects. The longer a defect sits unfixed in the code, the longer it takes a developer to fix it, with dire consequences for project deadlines. Developers move on to new code and new problems and lose the context of the features they have already delivered (can you remember what you ate for lunch last Monday?). When they have to come back to a defect they introduced weeks or months ago, it takes time to re-engage with the context of the code.
- Teams build up too much technical debt. One sure-fire killer of on-time delivery is finding out late in the development cycle that your application has major quality problems. Late discovery of defects leads to high rates of rework and waste. It is even worse if the quality issues are systemic – such as architecture design issues – or if someone discovers that basic user functionality is missing. The earlier testing starts – especially system testing and user acceptance testing – the earlier systemic risks will surface.
This is an extract of the Forrester report Navigating The Agile Testing Tool Landscape. Diego Lo Giudice is a principal analyst at Forrester Research.