Successful Test Automation Approach in Agile
Engineers in charge of Test Automation, further referred to as TA, face different challenges in their projects, depending on team size, geographic distribution of teams, project complexity, technologies, the methodology/lifecycle used etc. In this whitepaper, we focus on understanding and dealing with TA challenges that are common in an Agile environment.
The fundamental difference between Agile and traditional Waterfall is:
- Agile encourages early discovery of issues. This is often cited as “Fail Fast,” and it takes some time for the teams to get used to this symbiotic relationship between Agile software development and failure.
- Waterfall treats testing as a mass effort near the end of the project. However, the outcome of that testing (release pass or fail) is quite uncertain. Today, more than ever before, uncertainty is a bad thing for investors. This is one of the reasons companies opt for an Agile approach.
Even without going into a more detailed discussion about the differences between the two approaches, one can conclude that testing and TA in Agile are executed in a very different way from traditional Waterfall. Functional, integration and performance testing take place simultaneously with development. On top of this, there are other activities typical of most Agile flavors, like sprint planning, demos, retrospectives etc. Then, there is a continuously changing [business] environment and a development process where only essentials are documented. Considering all these factors, it is easy to understand how much test engineers need to learn and understand to be successful in testing, let alone in TA.
Agile encourages and welcomes change, and to keep that change under control (i.e. to prevent bugs from leaking to production), a team has to invest in TA and, more broadly, in Continuous Integration to shorten the feedback cycles [time from a commit to the test results] and to avoid having repetitive tasks executed by engineers.
All TA efforts, in general, have different cost/benefit ratios. Some things make more sense to be automated than others, because automation is neither a trivial nor cheap process:
- First, it makes sense to automate functionality that is critical for business and that is most used.
- It is important to cover integration points between various components and systems because they may change continuously in a project.
- Tests with huge input data, where results are usually statistically analyzed, are also good candidates for automation.
- Repetitive procedures and error-prone tasks in Continuous Integration including application deployment, loading data and configuring the system can be automated.
- TA is also useful in areas that have specific service level agreements (SLAs) and cannot be verified without automation.
Now that we have identified some of the constraints inherent to the development process and the typical areas where automation provides the best cost/benefit, we can discuss specific TA challenges. We will go through five important aspects of a successful test automation approach in Agile, namely:
- allocating time for testing and TA
- understanding what it means that a Test Script is Done in a Regression Testing context
- selecting the tool(s) for TA
- understanding what is going on with the environments
- making it easier to test
1. Allocating time for testing and TA
One of the typical traps of Agile, when it comes to testing, is that a team develops a feature in, say, Sprint 1 and tests it in Sprint 2. Even worse, if fixes are delivered in Sprint 3, the team is not delivering a potentially shippable increment (PSI) at the end of the sprint, but at the end of three sprints.
Agile recommends that during sprint planning, teams select features that can be developed and tested (with critical fixes verified) during a single sprint, which means that teams need to allocate enough time for testing and TA. One can observe that many Agile teams say “quality is a team-wide responsibility,” but only a few of them realize that quality is delivered only when TA and test artifacts have the same importance as the code.
Therefore, time must be explicitly allocated for testing, and automation should begin as early as possible (ideally, teams go into Test Driven Development (TDD) mode) and should not lag behind development. Acceptance criteria and retrospective sessions should give enough attention to TA. Moreover, testers should allocate enough time for exploratory testing that is performed manually. Without proper resource allocation, tests will lag behind code development, and this creates a huge risk for the business.
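As a minimal illustration of the TDD mode mentioned above, the test is written first and drives the production code. The `Cart` class and its pricing rules are invented purely for this sketch:

```python
# TDD sketch: the test exists before the feature and fails until the
# production code (the Cart class below) is written to satisfy it.

def test_cart_total_applies_discount():
    cart = Cart()
    cart.add("book", price=20.0, qty=2)
    assert cart.total(discount=0.25) == 30.0

# Production code written second, just enough to make the test pass.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price, qty=1):
        self.items.append((name, price, qty))

    def total(self, discount=0.0):
        subtotal = sum(price * qty for _, price, qty in self.items)
        return subtotal * (1 - discount)
```

In a real project the test would live in the team's suite and run on every commit, keeping the feedback cycle short.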
2. Understanding what it means that a Test Script is Done in a Regression Testing context
TA enables effective regression testing, i.e. repeating [almost] the same suite of tests against successive builds to give the team feedback about how they are progressing and how stable the code is. There is a balance to strike between result comparability and changing the tests. On one hand, test results are comparable only if tests are not changed; on the other, tests must change continuously: some new checks are added, some ‘old’ checks are changed because the underlying business requirements changed, etc.
In Regression Testing, it is important to understand what it means that a test script is Done. In the simplest terms, a test script is Done when it runs out of the testers’ hands: automatically, independently, correctly, without a single click and away from the tester’s machine. These scripts are usually part of a regression testing framework that not only executes the tests, but also prepares the environment, performs teardown and reports results. Therefore, test scripts are Done in a Regression Testing context only when they are out of testers’ hands and when the results they produce are used in decision making.
In addition, there are some test script quality attributes that require careful attention:
- Correctness: If a test script reports a 90% success rate, is it really the 90% expected? Test the test scripts by simulating failover scenarios, invalid deployments, invalid configurations etc.
- Integrity: Ideally, after the script has completed, the environment should be in a state that allows other scripts to execute and produce valid results, and that allows the same test script to execute repeatedly. Designing the script in three stages (Setup, Test and Teardown) helps address this issue.
- Maintainability: How much effort does it take to adjust test scripts to the changed requirements, interfaces, data structure, accounts, platforms etc.?
- Versioning: Scripts go in parallel with the code and should be versioned together with the code. Many teams simply deliver test scripts with every build.
- Portability: Can test scripts easily switch between different environments? Environment specific parameters should be decoupled from the scripts.
- Performance: The time it takes for a script to complete is critical in Continuous Integration. Coverage should not simply be slashed just to have the results arrive quickly: test performance needs to be analyzed and tuned, because new scripts come in constantly and only a few of the old ones get retired.
Next time, when evaluating whether a test script is really Done, consider all of the attributes mentioned above.
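The Integrity and Portability attributes can be sketched together: a script structured as Setup, Test and Teardown, with environment-specific parameters read from environment variables instead of being hard-coded. The `TA_BASE_URL` variable and the check itself are assumptions for illustration:

```python
import os

# Three-stage script sketch: Setup prepares state, Test performs the check,
# Teardown restores the environment so the script can run repeatedly.

class OrderApiCheck:
    def setup(self):
        # Portability: environment specifics are decoupled from the script.
        self.base_url = os.environ.get("TA_BASE_URL", "http://localhost:8080")
        self.created_ids = []           # remember created data to undo it later

    def test(self):
        # A real script would call the system under test via self.base_url;
        # this stub just records what it "created".
        order_id = 42                   # placeholder for a real API response
        self.created_ids.append(order_id)
        assert order_id > 0

    def teardown(self):
        # Integrity: leave the environment ready for the next script or rerun.
        self.created_ids.clear()

def run(check):
    check.setup()
    try:
        check.test()
    finally:
        check.teardown()                # teardown runs even if the test fails
    return True
```

Because the teardown sits in a `finally` block, a failed test still leaves the environment usable for the scripts that follow.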
3. Selecting the tool(s) for TA
Selecting the tools for TA in an Agile environment involves evaluating different criteria than in a traditional software development process. Given the infrastructure diversity in today’s business, it is easy to understand why testing tools need to work on various platforms (Linux, Windows etc). For non-GUI testing of things like web services and databases, it is desirable to have tools with flexible command line support, because the command line is a perfect gateway to automation, especially in Linux/Unix environments.
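Because the command line is such a convenient gateway to automation, many non-GUI checks boil down to running a command and inspecting its exit code and output. A generic sketch, where the `echo` command stands in for a real curl call or database client:

```python
import subprocess

# Command-line-driven check: run any CLI tool and verify its exit code and
# output. The command here is a neutral stand-in for a real service call.

def run_cli_check(command, expected_substring, timeout=30):
    completed = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=timeout
    )
    ok = completed.returncode == 0 and expected_substring in completed.stdout
    return ok, completed.stdout

ok, output = run_cli_check("echo service-is-up", "service-is-up")
```

The same wrapper works for any tool that exposes a command line, which is exactly why such tools fit automation so well.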
Controlling costs is always an important issue for a software company. If a free tool can satisfy testing needs, anyone on the Agile team can use the same tool and share the same script without additional costs. By contrast, when a company needs to invest considerable money in tool licensing, it will probably end up in a situation where not everyone on the team has access to the testing tool, and in Agile that is a considerable limitation.
However, as a project matures, engineers may realize the limitations of free testing tools and see opportunities to enhance the TA and Regression Testing process if they could just change a tiny thing in the tool. This is where open source tools are useful. Like no other industry, the software industry has access to incredible, free open source tools that are relatively easy to customize (just check the license agreement). What if the project is using new technologies that are still in beta? There are no test tools for them yet, but adapting existing open source tools may be the right answer.
Finally, there is no use in testing unless there is an easy way to access the test results. Teams will need their testing tools and CI framework to support reporting and alerting, so that is another important feature to consider when choosing a TA tool.
4. Understanding what is going on with the environments
Except in small projects, software engineers working in development and testing will often use and share multiple environments. These environments range from simple Virtual Machines (multiple VMs can coexist on the same hardware, potentially saving some money) to complex cloud environments that depend on multiple external services and are geographically distributed across several data centers. Either way, knowing exactly what is happening with the environment is a crucial time saver for Agile teams.
While testing, the first things engineers see are the symptoms – only then are the root causes searched for. Several problems, or combinations of them, may result in software behavior that is different from what was expected: data issues, environment configuration, service availability, the way exceptions are handled, software bugs, hardware failures, network problems, insufficient disk space, etc. Note that actual software bugs are only one part of the story, and teams that cannot distinguish between a bug and other problems will face false alarms quite often. After 10 false alarms, the team may not react properly to the 11th, which could be a critical bug.
Therefore, monitoring all aspects of the environment allows Agile teams to understand why software is behaving a certain way. No monitoring means a lot of time wasted and a lot of false alarms. So how does monitoring relate to TA? First, environment checks are something that has to be automated (sometimes using the same tools as in the actual testing). Second, it is important to build the monitoring system so that engineers can work on TA instead of wasting time trying to understand what is happening with the test environment.
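A few of the environment checks listed above (disk space, service availability) can be automated with nothing more than the standard library. The thresholds and hosts below are examples; a real monitoring system would publish this report to the team's dashboard or alerting channel:

```python
import shutil
import socket

# Environment-check sketch: before blaming the software, verify the basics.

def disk_ok(path="/", min_free_bytes=1 * 1024**3):
    """True if the filesystem holding `path` has at least `min_free_bytes` free."""
    return shutil.disk_usage(path).free >= min_free_bytes

def service_ok(host, port, timeout=2.0):
    """True if a TCP connection to host:port succeeds within `timeout` seconds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def environment_report(services):
    """Run all checks; the result can feed CI reporting and alerting."""
    report = {"disk": disk_ok()}
    for name, (host, port) in services.items():
        report[name] = service_ok(host, port)
    return report
```

Running such a report before the regression suite makes it much easier to tell an environment problem from a genuine software bug.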
5. Making it easier to test
Lean teaches us to eliminate anything that does not create value for the end customer. So what about the test scripts and the complex Continuous Integration framework and infrastructure? Are they considered waste in Lean? They should not be, because they directly contribute to delivering what the customer needs: quality software [updates] on time.
There are more and more projects where development invests time into building specific configurations, interfaces, hooks etc. to unlock the real power of TA. Here are several examples to consider:
- Many projects benefit from changing how the logs are tracked, i.e. enabling live log display within the application or returning the log within the web service response. Also, the format of the logs is important for tools that enable [distributed] log indexing and analysis.
- GUI projects benefit from creating testing hooks for specific application parts or controls. This enables TA that is more resilient to design changes on the UI level.
- Sometimes, teams decide to alter their API and develop new methods specifically for testing. For example, the customer does not ask for delete functionality, but it is very important for testing, so the team exposes this additional feature specifically for testing purposes.
- Analytics is another example of how aggregate data can aid testing: logging detailed processing workflow, performance of every step, inputs and outputs, makes it easier for discovering bugs and repeating problematic scenarios.
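For the log-format point above, emitting one JSON object per line is a common way to make application logs friendly to [distributed] indexing and analysis tools. A minimal sketch using the standard `logging` module; the `step` and `elapsed_ms` fields are invented examples of the analytics data mentioned above:

```python
import json
import logging

# One JSON object per log line: trivial for log indexers to parse, and easy
# to extend with workflow details such as step names and timings.

class JsonFormatter(logging.Formatter):
    def format(self, record):
        entry = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # Attach structured extras if the caller passed any via `extra=...`.
        for field in ("step", "elapsed_ms"):
            if hasattr(record, field):
                entry[field] = getattr(record, field)
        return json.dumps(entry)
```

A handler is then configured with `handler.setFormatter(JsonFormatter())`, and callers attach workflow details with `logger.info("order processed", extra={"step": "checkout", "elapsed_ms": 12})`.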
Whatever strategy is chosen for making testing easier, the team has to be aware that altering APIs and exposing details about the processing may also open a security hole. Therefore, these hooks and additions need to be either hidden and protected, or simply stripped from the builds that are promoted to production environments.
Conclusion
As more and more teams dive into Agile, they realize how much the Test Automation (TA) approach affects the success of the project. A successful TA approach together with Continuous Integration unlocks the potential of the team to work with a huge volume of changes in a controlled manner. Traditional “developer to tester” ratios like 3:1 or 2:1 simply do not work in an Agile environment. Every team member can and should contribute to building a successful TA approach, but this can happen only after the team realizes that test scripts are as valuable to the development process as the code itself.
The different aspects discussed in this whitepaper enable the team to evaluate and benchmark its processes and practices. They all have one thing in common: following the Agile principle of “Fail Fast” and working with failures, not running away from them.