Most teams kick off a new iteration with a planning session, where they discuss one story at a time, writing & estimating all of the tasks needed to implement it.
Testing task cards need to be written along with development task cards, and both need to be estimated realistically.
When writing programming task cards, make sure that coding task estimates include time for writing unit tests and for all necessary testing by programmers.
Testers should help make sure that all necessary cards are written and that they have reasonable estimates.
Your job as a tester is to make sure enough time is allocated to testing and to remind the team that testing & quality are the responsibility of the whole team. When the team decides how many stories they can deliver in an iteration, the question isn’t “How much coding can we finish?” but “How much coding and testing can we complete?”
Commit Conservatively – It is always better to bring in another story later than to drop a picked story.
Working closely with customers or customer proxies is one of the most important activities as an agile tester. Good communication usually takes work.
We want “big-picture” tests to help the programmers get started in the right direction on a story. High level tests should convey the main purpose behind the story.
Don’t forget to ask the programmers what they think you might have missed. What are the high-risk areas of the code? Where do they think testing should be focused?
When Testability is an issue, make it the team’s problem to solve.
One beneficial side-effect of reviewing the tests with programmers is the cross-learning that happens.
High-level test cases, along with the executable tests you’ll write during the iteration, will form the core of the application’s documentation.
People unfamiliar with agile development often have the misconception that there’s no documentation. In fact, agile projects produce usable documentation that contains executable tests and thus, is always up to date.
Testers in agile must be proactive. Instead of waiting for work to come to them, they get up and go look for ways to contribute.
Working on stories in advance of the iteration may be useful for teams that are split across different geographic locations. By working ahead, there’s time to get information to everyone and give them a chance to give their input.
If doing some research and brainstorming before we start the iteration makes our iteration planning go faster and reduces the risk of the stories we’re going to undertake, it’s worth the effort.
The Pre-Planning Meeting
Go Over stories for the next iteration
The Product owner explains the purpose of each story – business conditions of satisfaction.
Team brainstorms about potential risks and dependencies, asks questions and figures out the simplest path.
Pull in customers to answer questions and get a better shared understanding.
Experiment with short Pre-Iteration discussions and Test-Writing sessions
Invest preparation time when it’s appropriate. There is a risk to ‘working ahead’.
To go Fast – We need to Slow Down First!
Teams that are distributed in multiple locations may do their iteration planning by conference call, online meeting, or teleconference. (And cut to 2020 – Coronian Times – every one of us is doing that!!)
One practice that Lisa’s team used was to assign each team member a subset of the upcoming stories and have them write task cards in advance.
(I, too, have used this practice – only the Task Cards were in fact story Sub-tasks being created in JIRA for our user story items created by the PO)
If the customers aren’t readily available to answer questions and make decisions, other domain experts who are accessible at all times should be empowered to guide the team by determining priorities and expressing desired system behavior with examples.
(I have experienced that – our Product Owners essentially did this job for us)
Examples are an effective way to learn about and illustrate desired functionality. Using Examples, you can write high level tests to flesh out the story a bit more.
Mock-ups are essential for stories involving UI or a report. Ask your customers to draw up their ideas about how the page should look.
Before the next iteration – triage the outstanding issues with the customer. Those deemed necessary should be scheduled into the next iteration.
Testers bring a different viewpoint to planning and estimation meetings. They need to be a part of the story sizing process.
The team needs to develop in small, testable chunks in order to help decide which stories are tentatively planned for which iteration. The key word being ‘testable’.
If there are stories that present a big testing challenge, it might be good to do those early on.
Release Planning is the time to start asking for examples and use cases of how the features will be used, and what value they’ll provide. Drawing flowcharts or sample calculations on a whiteboard can help pinpoint the core functionality.
The agile tester thinks about how each story might affect the system as a whole or other systems that ours has to work with.
In agile development, the Test Plan must be concise and lightweight, assessing testing issues, including risk analysis, and identifying assumptions. The biggest benefit of test planning is the Planning itself.
This chapter shows examples of lightweight agile Test Plans created by Lisa and Janet that are very useful! Here is my take on creating a simplistic agile test plan using a mind-map-
The chapter discusses Task Boards and how they can be leveraged. Here is my take on using task boards in agile teams that I wrote a few months back –
Agile metrics are key to measuring the team’s progress. Plan for what metrics you want to capture for the life of the release, think about what problem you are trying to solve, and capture only those metrics that are meaningful for your team.
Here is something I wrote about useful and not-so-useful Agile metrics-
Don’t get caught up with committing to your plans- the situation is bound to change. Instead, prepare for doing the right activities and getting the right resources in time to meet the customer’s priorities!
Use the Agile Test Quadrants to help you identify the different types of test automation tools you might need for each project, even each iteration.
Test Automation Pyramid (introduced by Mike Cohn)
Lowest Layer – the bulk of automated tests: unit and component tests, technology-facing. They give the quickest feedback and let programmers code much more quickly, using the xUnit family of tools.
Middle layer – automated business-facing tests that help the team answer “Are we building the right thing?” These tests operate at the API level, behind the GUI. Bypassing the presentation layer makes them less expensive to write & maintain. Fit & FitNesse are good examples – tests are written in the domain language.
Top Tier – should be the smallest automation effort, as these tests have the lowest ROI. They are done through the GUI, operating on the presentation layer, and are more expensive to write, more brittle, and need more maintenance.
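To make the two lower layers concrete, here is a minimal sketch in Python (a hypothetical shopping-cart domain – the names are mine, not from the book) contrasting a fast, xUnit-style unit test on the bottom layer with a business-facing test that drives the same logic through a service API, bypassing any GUI:

```python
def line_total(price, quantity):
    """Pure domain logic - the kind of code covered by bottom-layer unit tests."""
    return round(price * quantity, 2)

class CartService:
    """A tiny service API - middle-layer tests exercise this, behind the GUI."""
    def __init__(self):
        self.items = []

    def add_item(self, price, quantity):
        self.items.append(line_total(price, quantity))

    def total(self):
        return round(sum(self.items), 2)

# Bottom layer: fast, fine-grained unit test (xUnit style, via plain asserts here)
def test_line_total():
    assert line_total(1.50, 3) == 4.50

# Middle layer: business-facing test against the API, bypassing the presentation layer
def test_cart_total():
    cart = CartService()
    cart.add_item(1.50, 3)
    cart.add_item(2.00, 1)
    assert cart.total() == 6.50

test_line_total()
test_cart_total()
```

The middle-layer test reads in terms of the business action (add items, check the total) rather than GUI widgets, which is what keeps it cheap to maintain.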
Any tedious or repetitive task involved in developing software is a candidate for automation.
An automated deployment process is imperative – getting automated build emails listing every change made is a big help to testers. It speeds up testing & reduces errors.
A fast running continuous integration and build process gives the greatest ROI of any automation effort.
Another useful area for automation is data creation or setup. Cleaning up test data is as important as generating it. Your data creation toolkit should include ways to tear down the test data so it doesn’t affect a different test or prevent rerunning the same test.
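Here is a minimal sketch of paired setup and teardown, using Python’s built-in sqlite3 with an in-memory database (the table and rows are made up for illustration):

```python
import sqlite3

def create_test_data(conn):
    """Set up the rows a test needs."""
    conn.execute("CREATE TABLE IF NOT EXISTS accounts (name TEXT, balance REAL)")
    conn.execute("INSERT INTO accounts VALUES ('test-user', 100.0)")
    conn.commit()

def tear_down_test_data(conn):
    """Remove the test rows so other tests (or a rerun) start clean."""
    conn.execute("DELETE FROM accounts WHERE name = 'test-user'")
    conn.commit()

conn = sqlite3.connect(":memory:")
create_test_data(conn)
count = conn.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]

tear_down_test_data(conn)
remaining = conn.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]
```

Because teardown is a first-class function in the toolkit, every test can leave the database exactly as it found it, so tests stay independent and rerunnable.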
What we shouldn’t automate
Tests that will never fail
Plan in plenty of time for evaluating tools, setting up build processes, and experimenting with different test approaches in the initial iterations.
If management is reluctant to give the team time to implement automation, explain the trade-offs clearly. Work towards a compromise.
We will always have deadlines, and we always feel pressed for time. There is never enough time to go back and fix things. During your next planning meeting, budget time to make meaningful progress on your automation efforts.
Good test management ensures that tests can provide effective documentation of the system and of the development progress
“Why we Want to Automate Tests and What holds us back”
Test automation is a core agile practice. Agile projects depend on automation.
Manual tests take too long and are error prone.
Automated regression tests provide a safety net. They give feedback early & often.
Automated builds, deployment, version control and monitoring also go a long way toward mitigating risk and making your development process more consistent.
“Build Once, Deploy to Many” – is a tester’s dream!
Projects succeed when good people are free to do their best work. Automating tests appropriately makes that happen!
If we become complacent about our testing challenges and depend solely on automated tests to find our issues, and then just fix them enough for the test to pass – we do ourselves a disservice.
However, if we use the tests to identify problem areas and fix them the right way or refactor as needed, then we are using the safety net of automation in the right way.
When tests that illustrate examples of desired behavior are Automated, they become ‘living’ documentation of how the system actually works.
Barriers to Automate –
Programmer’s attitude – Why automate at all
The Hump of Pain – the initial learning curve
Fear – Non-programming testers fear they have nothing to contribute
Old habits, team culture
Without automated regression tests, manual regression testing will continue to grow in scope and eventually may simply be ignored.
Teams with automated tests and build processes have a more stable velocity.
Good test design practices produce simple, well-designed, continually refactored, maintainable tests.
Team culture & history may make it harder for programmers to prioritize automating business-facing tests over coding new features. Using Agile principles & values helps the whole team overcome barriers to test automation.
This chapter reviews all the four Agile Testing Quadrants by illustrating an example of a team’s success story in testing their whole system using a variety of home-grown and open source tools.
The system is related to monitoring of remote oil and gas production wells. The software application had a huge legacy system with very few unit tests, and the team was slowly rebuilding the application using new technology. The chapter describes how they used tests from all four quadrants to support them.
Using Test Driven Development and Pair Programming wholeheartedly. Also adding unit tests and refactoring all legacy code they encountered on the way.
The product engineer writing acceptance tests and sharing them with the developers and testers before they began coding.
Automation involving functional test structure, web services and embedded testing
Exploratory testing to supplement the automated tests to find critical issues.
Don’t forget to Document… but only what is useful
Finding ways to keep customers involved in all types of testing, even if they are remote. Have UATs and end-to-end tests
Use lessons learnt during testing to critique the product in order to drive the development in the next iterations
“Critiquing the Product Using Technology-Facing Tests”
Technology-facing tests that critique the product are more concerned with the non-functional aspects – deficiencies of the product from a technical point of view.
We describe requirements using a programming domain vocabulary. This is the main focus of Quadrant-4 of our Agile Testing Quadrants.
Customers simply assume that software will be designed to properly accommodate the potential load, at a reasonable rate of performance. It doesn’t always occur to them to verbalize those concerns.
Tools, whether home-grown or acquired, are essential to succeed with Quadrant 4 testing efforts.
“Many teams find that a good technical tester or toolsmith can take on many of these tasks.”
Take a second look at the skills that your team already possesses, and brainstorm about the types of “ility” testing that can be done with the resources you already have. If you need outside teams, plan for that in your release and iteration planning.
The information these (Quadrant-4) tests provide may result in new stories and tasks in areas such as changing the architecture for better scalability or implementing a system-wide security solution. Be sure to complete the feedback loop from tests that critique the product to tests that drive changes that will improve the non-functional aspects of the product.
When Do you Do it?
Technical stories can be written to address specific requirements.
Consider a separate row on your story board for tasks needed by the product as a whole.
Find a way to test them early in the project
Prioritize stories such that a steel thread or a thin slice is complete early, so that you can create a performance test that can be run and continued as you add more functionality.
The time to think about your non-functional tests is during release or theme planning.
The team should consider various types of “ility” testing – including security, maintainability, interoperability, compatibility, reliability, and installability – and should execute them at appropriate times.
Performance, Scalability, Stress and Load tests should be done from the beginning of the project.
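As a rough illustration of starting performance checks early, here is a sketch that times a critical operation against a budget. The operation and the 50 ms budget are assumptions for the example, not anything from the book:

```python
import time

def critical_operation():
    # Stand-in for a hot code path in the application (assumed for illustration).
    return sum(i * i for i in range(1000))

def time_operation(op, iterations=100):
    """Return the average seconds per call over a number of iterations."""
    start = time.perf_counter()
    for _ in range(iterations):
        op()
    elapsed = time.perf_counter() - start
    return elapsed / iterations

avg = time_operation(critical_operation)
within_budget = avg < 0.05  # 50 ms budget per call (assumed)
```

Run with every build, a check like this catches a performance regression in the iteration that introduces it, instead of at the end of the release.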
Critiquing or evaluating the product is what business users or testers do when they assess and make judgements about the product.
These are the tests performed in Quadrant 3 of our Agile Testing Quadrants
It is difficult to automate business-facing tests that critique the product, because such testing relies on human intellect, experience, and insight.
You won’t have time to do any Quadrant 3 tests if you haven’t automated tests in Quadrants 1 and 2.
Evaluating or critiquing the product is about manipulating the system and trying to recreate the actual experience of end users.
Show customers what you are developing early & often.
End-of-iteration demos are important to see what has been delivered and revise priorities
Rather than just waiting for end of sprint demos, use any opportunity to demonstrate changes as you go.
Choose a frequency of demos that works for your team. Informal demos can be more productive
Scenario Testing – Business users can help define plausible scenarios & workflows that can mimic end user behavior
Soap Opera Testing – a term coined by Hans Buwalda (2003); it can help the team understand business & user needs. Ask “What’s the worst thing that can happen, and how did it happen?”
Exploratory Testing – a sophisticated, thoughtful approach to testing without a script, combining learning, test design, and test execution. As an investigative tool, it is a critical supplement to the story tests and our automated regression suite.
There are two types of usability testing. The first is done up front by user experience folks, using tools such as wireframes to drive programming. These are part of Quadrant 2.
The second is the kind of usability testing that critiques the product. We use tools such as User Personas and our intuition to help us look at the product with the end user in mind.
Instead of just thinking about testing interfaces, we can also look at APIs and consider attacking the problem in other ways and consider tools like simulators & emulators.
User manuals & online help need validation just as much as software. Your team may employ specialists like technical writers who create & verify documentation. The entire team is responsible for the quality of documentation.
“Toolkit for Business-Facing Tests that Support the Team”
As agile development has gained in popularity, we have more and more tools to help us capture examples and use them to write executable tests.
Your strategy for selecting the tools you need should be based on your team’s skill set, the technology your application uses, your team’s automation priorities, time, and budget constraints. Your strategy should NOT be based on the latest and coolest tool in the market.
In agile, simple solutions are usually best.
Some tools that can help us illustrate desired behavior with examples, brainstorm potential implementations and ripple effects and create requirements we can turn into tests are—
Software based tools
A picture is worth a thousand words, even in agile teams. Mock-ups show the customer’s desires more clearly than a narrative possibly could. They provide a good focal point for discussing the desired code behavior.
Visuals such as flow diagrams and mind maps are good ways to describe an overview of a story’s implementation, especially if they are created by a group of customers, programmers, and testers.
Tools such as Fit (Framework for Integrated Tests) and FitNesse were designed to facilitate collaboration and communication between the customer and development teams.
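As a rough illustration of the style, a Fit/FitNesse decision table pairs business-readable inputs with expected outputs; this is the classic division example from the Fit documentation (the `eg.Division` fixture checks each row’s `quotient?` column against the actual result):

```
|eg.Division|
|numerator|denominator|quotient?|
|10       |2          |5        |
|12.6     |3          |4.2      |
|100      |4          |25       |
```

A customer can read, review, and even extend a table like this without writing any code, which is exactly the collaboration these tools were designed for.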
Finding the right electronic tools is particularly vital for distributed teams (chat, screensharing, video conferencing, calling, task boards etc.)
Selenium, Watir and WebTest are some examples of many open source tools available for GUI testing.
Home-Brewed Test Automation Tools
Bret Pettichord (2004) coined the term ‘home-brewed’ for tools agile teams create to meet their own unique testing needs. This allows even more customisation than an open source tool. They provide a way for non technical customer team members to write tests that are actually executable by the automated tool. Home-brewed tools are tailored to their needs, designed to minimize the total cost of ownership and often built on top of existing open source tools.
The best tools in the world won’t help if you don’t use them wisely. Test tools might make it very easy to specify tests, but whether you are specifying the right tests at the right time is up to you.
Writing detailed test cases that communicate desired behavior is both art and science.
Whenever a test fails in the Continuous Integration (CI) and build process, the team’s highest priority should be to get the build passing again. Everyone should stop what they are doing and make sure that the build goes ‘green’ again. Determine if a bug has been introduced, or if the test simply needs to be updated to accommodate intentionally changed behavior. Fix the problem, check it in, and make sure all tests pass!
Experiment – so that you can find the right level of detail and the right test design for each story.
Keep your tests current and maintainable through refactoring.
Not all code is testable using automation, but work with programmers to find alternative solutions to your problems.
Manual test scenarios can also drive programming if you share them with the programmers early. The earlier you turn them into automated tests, the faster you will realise the benefit.
Start with a simple approach, see how it works, and build on it. The important thing is to get going writing business-facing tests to support the team as you develop your product.
The Agile Testing Quadrants matrix helps testers ensure that they have considered all of the different types of tests that are needed in order to deliver value.
Unit tests verify functionality of a small subset of the system. Component tests verify the behaviour of a larger part such as a group of classes that provide some services. Unit & Component tests are automated and written in the same programming language as the application. They enable programmers to measure what Kent Beck has called the internal quality of their code.
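A small sketch of that distinction in Python’s unittest (the classes are hypothetical): a unit test checks one class in isolation, while a component test checks a group of collaborating classes through the service they provide.

```python
import unittest

class Discount:
    """One small unit of behaviour."""
    def apply(self, price):
        return price * 0.9

class PriceCalculator:
    """A service built from collaborating classes."""
    def __init__(self):
        self.discount = Discount()

    def final_price(self, price, tax_rate):
        return round(self.discount.apply(price) * (1 + tax_rate), 2)

class DiscountUnitTest(unittest.TestCase):
    # Unit test: verifies a small subset of the system in isolation.
    def test_discount(self):
        self.assertEqual(round(Discount().apply(100), 2), 90.0)

class PricingComponentTest(unittest.TestCase):
    # Component test: verifies the behaviour of the collaborating group.
    def test_final_price(self):
        self.assertEqual(PriceCalculator().final_price(100, 0.1), 99.0)
```

Run with `python -m unittest` against the file; both kinds of tests live in the same language as the application, as the chapter notes.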
Tests in Quadrant-2 support the work of the development team but at a higher level. These business-facing tests define external quality and the features that the customers want. They’re written in a way business experts can easily understand using the business domain language.
The quick feedback provided by Quadrant 1 and 2 automated tests, which run with every code change or addition, forms the foundation of an agile team. These tests first guide the development of functionality and, once automated, provide a safety net that prevents refactoring and the introduction of new code from causing unexpected results.
“Appraising a software product involves both art and science.”
Quadrant-3 classifies the business-facing tests that exercise the working software to see if it doesn’t quite meet expectations or won’t stand up to the competition. They try to emulate the way a real user would work the application. This is manual testing that only a human can do…use our senses, our brains and our intuition to check whether the development team has delivered the business value required by the customer.
Exploratory testing is central to this quadrant.
Technology-facing tests in Quadrant-4 are intended to critique product characteristics such as performance, robustness and security.
Creating and running these tests might require the use of specialised tools and additional expertise.
Automation is mandatory for some efforts such as load and performance testing.
Ward Cunningham coined the term “technical debt” in 1992, but we’ve certainly experienced it throughout our careers in software development.
By taking the time and applying resources and practices to keep technical debt to a minimum, a team will have time and resources to cover the testing needed to ensure a quality product. Applying agile principles to do a good job of each type of testing at each level will, in turn, minimize technical debt.
Each quadrant in the agile testing matrix plays a role in keeping technical debt to a manageable level.
The Agile Testing Quadrants provide a checklist to make sure you have covered all your testing bases. Examine the answers to questions such as:
Are we using unit & component tests to help find the right design for our application?
Do we have an automated build process?
Do our business-facing tests help us deliver a product that matches customer expectations?
Are we capturing the right examples of desired system behaviour?
Do we show prototypes of the UIs and reports to the users before we start coding them?
Do we budget enough time for exploratory testing?
Do we consider technological requirements like performance and security early enough?