Getting test automation done is a challenge, especially within the tight deadlines imposed by Scrum. As much as the thought of continuous in-sprint test automation sounds enticing, the practicality of it may elude most Scrum teams.
In my article published here, I look at some of the main things you need to consider in order to get your test automation done within the confines of your sprint.
The first thing to focus on is a framework that is useful, is easy to understand, and helps all stakeholders participate in test automation.
This is essential because you want to make test automation a continuous activity that is a part of daily work, not a once-a-sprint (or once-a-release) work item. For this to happen, the framework must make it equally comfortable for a businessperson, developer, functional tester or automation expert to add their contribution and see the results of their efforts.
There are many business-friendly frameworks and techniques, like behavior-driven development (BDD), as well as many tools that can create tests in a domain language and then translate them to script code.
All stakeholders must be trained on using the framework, and their area of contribution must be made clear to them, with practical hand-holding. The automation tester can then focus on maintaining the framework, generating test suites and editing failing scripts, while the creation of test automation will be a continuous task assigned to everyone involved.
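To illustrate the idea, here is a minimal sketch of such a stakeholder-friendly framework in Python: plain-language steps written by any team member are matched by pattern to small functions maintained by the automation tester. The shopping-cart "domain" and the step wordings are invented for the example.

```python
import re

# Minimal sketch of a BDD-style step registry: plain-language steps
# written by any stakeholder are matched by pattern to Python functions
# maintained by the automation tester. The cart domain is invented.
STEPS = {}

def step(pattern):
    """Register a step implementation under a plain-language pattern."""
    def decorator(func):
        STEPS[re.compile(pattern)] = func
        return func
    return decorator

@step(r"the cart contains (\d+) items?")
def given_cart(ctx, count):
    ctx["cart"] = int(count)

@step(r"the user adds (\d+) items?")
def when_user_adds(ctx, count):
    ctx["cart"] += int(count)

@step(r"the cart total is (\d+)")
def then_cart_total(ctx, count):
    assert ctx["cart"] == int(count), f"expected {count}, got {ctx['cart']}"

def run_scenario(lines):
    """Execute a scenario written in near-plain English, line by line."""
    ctx = {}
    for line in lines:
        for pattern, func in STEPS.items():
            match = pattern.search(line)
            if match:
                func(ctx, *match.groups())
                break
    return ctx

scenario = [
    "Given the cart contains 2 items",
    "When the user adds 3 items",
    "Then the cart total is 5",
]
run_scenario(scenario)
```

Real BDD tools such as Cucumber, SpecFlow or behave apply this same pattern-to-code idea at a much larger scale.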
The next thing to focus on is collaboration between the various stakeholders. A continuous automation framework can only survive when it is being fed and tended to by everyone on the team.
The business people, like a business analyst or a product owner, can help by adding user scenarios or defining the requirements in a framework-friendly format. This may require them to be trained on the preferred format based on the framework being used.
The developers can help by creating reusable methods for steps of the script. They can also create and maintain an object repository for all elements they add to the UI, while testers use the pseudo names of the elements in the test scripts. This means that the scripts can be created before (and independent of) the application UI, and such scripts won’t need editing when the UI changes, as long as the object repository is kept up to date.
The testers can help by adding more scenarios, specifying and creating test data, and executing the scripts periodically.
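A minimal sketch of the object repository idea, assuming hypothetical element names and locators: test scripts refer only to stable pseudo names, while the concrete locators live in one place that developers keep current.

```python
# Sketch of a developer-maintained object repository: test scripts refer
# to stable pseudo names, while the actual locators live in one place.
# The element names and locators here are hypothetical.
OBJECT_REPOSITORY = {
    "login_button":   ("id", "btn-login"),
    "username_field": ("css", "input[name='user']"),
    "password_field": ("css", "input[name='pass']"),
}

def locate(pseudo_name):
    """Resolve a pseudo name used in test scripts to a concrete locator."""
    try:
        return OBJECT_REPOSITORY[pseudo_name]
    except KeyError:
        raise KeyError(f"'{pseudo_name}' is not in the object repository; "
                       "add it when the element is added to the UI")

# A test script only knows the pseudo name; if developers rename the
# button's id, only the repository entry changes, not the script.
strategy, value = locate("login_button")
```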
How to strategize the development of test scripts is crucial to making in-sprint automation a reality. Using API-level automation whenever possible will reduce the time and effort.
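To illustrate why API-level checks save time, here is a toy sketch that verifies a made-up discount rule directly at the logic layer, with no browser, rendered page or brittle locators in the loop:

```python
# Illustrative only: the same business rule checked at the API/logic
# layer instead of through the UI. The discount rule below is made up.
def apply_discount(total, code):
    """Stand-in for an API endpoint: apply a discount code to an order total."""
    rates = {"SAVE10": 0.10, "SAVE20": 0.20}
    if code not in rates:
        raise ValueError(f"unknown discount code: {code}")
    return round(total * (1 - rates[code]), 2)

def test_discount_at_api_level():
    # Runs in milliseconds: no browser start-up, no page loads.
    assert apply_discount(100.0, "SAVE10") == 90.0
    assert apply_discount(100.0, "SAVE20") == 80.0
    try:
        apply_discount(100.0, "BOGUS")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for unknown code")

test_discount_at_api_level()
```

A handful of UI tests can then confirm the wiring, while the bulk of the scenarios run at this faster layer.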
Test automation is imperative for the fast-paced agile projects of today. Testers need to continuously plan, design and execute automated tests to ensure the quality of the software. But the most important task is to decide what to automate first.
In my article published at the Gurock Blog website, I have compiled a list of questions to help you prioritize what you should automate next and guide your test automation strategy.
Think of this as a checklist that helps you make automation decisions quickly and effectively and create a standard process around them for your team to follow. Here is the list of questions to ask yourself:
Do you need to run the test with multiple datasets or paths?
Is it a Regression or Smoke Test?
Does this automation lie within the feasibility of your chosen test automation tool?
Is the area of your app that this is testing prone to change?
Is it a Random Negative Test?
Can these tests be executed in parallel, or only in sequential order?
Are you doing it only for the reports?
Test automation tools will provide you with useful insights into the quality of the software that you can showcase with the use of some insightful reports. But are these reports the only reason you are looking at automation? Just looking at the red or green status results of the test reports might not be the best way to assess the software quality. You will need to spend time analyzing the tests that failed, why they failed, and what needs to be corrected. Tests created once will need maintenance and continuous monitoring to keep them up to date. All of that needs to be kept in mind and the effort needs to be accounted for.
There is more to test automation than just the fancy reports!
Looking at the questions above, analyse the state of your test case, the intent behind its automation, and its feasibility, as well as the value that you might get out of it. Hope that helps you decide what tests you should or should not be picking for automation!
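To illustrate the first question on the list, here is a sketch of a data-driven test using Python's unittest: one script exercised against several datasets via subTest. The email validation rule is deliberately simplified and hypothetical.

```python
# Sketch of a data-driven test: one script, many datasets. The email
# validation rule here is deliberately simplified and hypothetical.
import unittest

def is_valid_email(address):
    """Toy validation: one '@', non-empty local part, a dot in the domain."""
    local, sep, domain = address.partition("@")
    return bool(sep) and bool(local) and "." in domain and "@" not in domain

class TestEmailDataDriven(unittest.TestCase):
    DATASETS = [
        ("user@example.com", True),
        ("user.name@sub.example.com", True),
        ("no-at-sign.com", False),
        ("two@@example.com", False),
        ("@example.com", False),
    ]

    def test_all_datasets(self):
        # One test body, run once per dataset; failures report the address.
        for address, expected in self.DATASETS:
            with self.subTest(address=address):
                self.assertEqual(is_valid_email(address), expected)

result = unittest.main(exit=False, argv=["ignored"]).result
```

When a test must cover many datasets or paths like this, automation tends to pay for itself quickly.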
An agile team delivers working software at the end of each iteration, demonstrating it to the customers and getting their feedback.
Having testers conduct the ’Iteration Review’ is a common practice, as they’ve usually worked on all the stories. The Scrum Master, programmers or testers could demonstrate the new features; it is recommended to rotate this honor.
Retrospectives are an excellent place to start identifying what and how you can do better.
Start, Stop, Continue technique – discussing what went well, what did not go well, and what we can start doing to help.
Write task cards for actions to be undertaken to implement the steps.
At the end of the next iteration, take a checkpoint to see if you improved.
Retrospectives are a simple and highly effective way for teams to identify & address issues. The retrospective meeting is a perfect opportunity to raise testing-related issues. Bring up issues in an objective, non-blaming way.
Make sure your team takes at least a little time to pat itself on the back and recognise its achievements.
Even Small Successes deserve a Reward.
Many agile teams have trouble taking time to celebrate success.
Have a weekly fun gathering or team games.
For big milestones, such as a big release or achieving a test coverage goal, the whole company can have a party to celebrate, bringing in catered food or going out.
It is also important to celebrate individual successes. A ‘Shout-Out Shoebox’ is a great way to recognize the value different team members contribute.
Taking time to celebrate successes lets your team take a step back, get a fresh perspective, and renew its energy so it can keep improving your product, giving team members a chance to appreciate each other’s contributions. Don’t fall into a routine where everyone has their head down working all the time!
Take advantage of the opportunity after each iteration to identify testing-related obstacles, and think of ways to overcome them.
The beginning of coding is a good time to start writing detailed tests.
As testers think of new scenarios to validate with executable tests, they also think about potential scenarios for manual exploratory testing. Make a note of these for later pursuit.
Some quick risk analysis can help you decide what testing to do first and where to focus your efforts.
The Power of Three Rule – When unexpected problems arise, you may need to pull in more people or even the entire team. Tester, Developer and Customer (or businesspeople) can together decide on correct behavior and solutions.
As soon as testable chunks of code are available, and the automated tests that guided their coding pass, take time to explore the functionality more deeply. Try different scenarios and learn more about the code’s behavior. You should have task cards for tests that critique the product, both business- and technology-facing. The story is not ‘done’ until all of these test types are done.
If your exploratory tests lead the team to realise that significant functionality was not covered by the stories, write new stories for future iterations. Keep a tight rein on “Scope Creep” or your team won’t have time to deliver the value you originally planned.
Technology-facing tests that critique the product are often done best during coding. This is the time to know if the design doesn’t scale or if there are security holes.
Leaving bugs festering in the code base has a negative effect on code quality, system intuitiveness, system flexibility, team morale and velocity.
Strive for “zero tolerance” towards bug counts.
Teams have solved the problem of how to handle defects in different ways.
Some teams put all their bugs on task cards.
Some teams choose to write a card, estimate it and schedule it as a story.
Some teams suggest adding a test for every bug.
The more bugs you can fix immediately, the less technical debt your application generates and the less ‘defect’ inventory you have.
Try making the estimate for each story include (at least) two hours or half a day for fixing associated bugs.
If a bug is really missed functionality, choose to write a card for the bug and schedule it as a story.
Code produced test-first is fairly free of bugs by the time it is checked-in.
The Daily Stand-Up helps teams maintain the close communication they need.
Use Big, visible charts such as story boards, Burndown charts and other visual cues to help keep focus and know your status.
Having story boards gives your team focus during the stand-ups or when you are talking to someone outside the team about your progress.
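For illustration, the arithmetic behind a burndown chart is simple; this sketch (with an invented 5-day, 20-point iteration) flags the days on which the team sits above the ideal line:

```python
# Illustrative burndown arithmetic behind a "big visible chart":
# remaining story points per day versus the ideal straight line.
def ideal_line(total_points, days):
    """Ideal remaining work at the end of each day of the iteration."""
    return [round(total_points * (1 - d / days), 1) for d in range(1, days + 1)]

def behind_schedule(actual_remaining, total_points, days):
    """1-based day numbers on which actual remaining work exceeds the ideal."""
    ideal = ideal_line(total_points, days)
    return [d + 1 for d, (a, i) in enumerate(zip(actual_remaining, ideal)) if a > i]

# A hypothetical 5-day iteration with 20 points of planned work.
actual = [18, 16, 13, 8, 0]
late_days = behind_schedule(actual, 20, 5)
```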
Testers can help keep the iteration progressing smoothly by helping make sure everyone is communicating enough. They can help programmers and customers find a common language.
Use retrospectives to evaluate whether collaboration & communication need improving and brainstorm ways to improve.
Teams in different locations have to make a special effort to keep each other informed.
Teams take different approaches to make sure their build stays ‘green’.
The build needs to provide immediate feedback, so Keep It Short.
Tests that take too long, such as tests that update the database, functional tests above Unit level or GUI test scripts, should run in a separate build process.
Having a separate, continual ‘Full’ build with all of the regression suites is worth the investment.
During the iteration, you are automating new tests. As soon as these pass, add them to the Regression Suite.
As you start the iteration, make sure that test environments, test data, and test tools are in place to accommodate testing.
You may have brought in outside resources for the iteration to help with performance, security, usability or other forms of testing. Include them in stand-ups and discussions. Pair with them to help them understand the team’s objectives. This is an opportunity to pick up new skills!!
Consider what metrics you need during the iteration – progress and defect metrics are two examples.
Whatever metrics you choose to measure – Go for Simplicity!
Testers bring a different viewpoint to planning and estimation meetings. They need to be a part of the story sizing process.
The team needs to develop in small, testable chunks in order to help decide what stories are tentatively planned for which iteration. The keyword being ‘testable’.
If there are stories that present a big testing challenge, it might be good to do those early on.
Release Planning is the time to start asking for examples and use cases of how the features will be used, and what value they’ll provide. Drawing flowcharts or sample calculations on a whiteboard can help pinpoint the core functionality.
The agile tester thinks about how each story might affect the system as a whole or other systems that ours has to work with.
In agile development, a test plan must be concise and lightweight, assessing testing issues, including risk analysis and identifying assumptions. The biggest benefit of test planning is the planning itself.
This chapter shows examples of lightweight agile Test Plans created by Lisa and Janet that are very useful! Here is my take on creating a simplistic agile test plan using a mind-map-
The chapter discusses task boards and how they can be leveraged. Here is my take on using task boards by agile teams that I wrote a few months back –
Agile metrics are key to measuring the team’s progress. Plan for what metrics you want to capture for the life of the release, think about what problem you are trying to solve, and capture only those metrics that are meaningful for your team.
Here is something I wrote about useful and not-so-useful Agile metrics-
Don’t get caught up with committing to your plans; the situation is bound to change. Instead, prepare for doing the right activities and getting the right resources in time to meet the customer’s priorities!
“Critiquing the Product Using Technology-Facing Tests”
Technology-facing tests that critique the product are more concerned with the non-functional aspects – deficiencies of the product from a technical point of view.
We describe requirements using a programming domain vocabulary. This is the main focus of Quadrant 4 of our Agile Testing Quadrants.
Customers simply assume that software will be designed to properly accommodate the potential load, at a reasonable rate of performance. It doesn’t always occur to them to verbalize those concerns.
Tools, whether home-grown or acquired, are essential to succeed with Quadrant 4 testing efforts.
“Many teams find that a good technical tester or toolsmith can take on many of these tasks.”
Take a second look at the skills that your team already possesses, and brainstorm about the types of “ility” testing that can be done with the resources you already have. If you need outside teams, plan for that in your release and iteration planning.
The information these (Quadrant-4) tests provide may result in new stories and tasks in areas such as changing the architecture for better scalability or implementing a system-wide security solution. Be sure to complete the feedback loop from tests that critique the product to tests that drive changes that will improve the non-functional aspects of the product.
When Do You Do It?
Technical stories can be written to address specific requirements.
Consider a separate row on your story board for tasks needed by the product as a whole.
Find a way to test them early in the project.
Prioritize stories such that a steel thread or a thin slice is complete early, so that you can create a performance test that can be run and continued as you add more functionality.
The time to think about your non-functional tests is during release or theme planning.
The team should consider various types of “ility” testing – including security, maintainability, interoperability, compatibility, reliability and installability – and should execute them at appropriate times.
Performance, Scalability, Stress and Load tests should be done from the beginning of the project.
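A minimal sketch of such an early performance check on a thin slice of functionality; the operation and the 50 ms budget are arbitrary placeholders, but the shape (time the steel thread, fail on a blown budget) is what allows the test to run unchanged as functionality grows:

```python
# Minimal sketch of an early performance check on a thin slice of
# functionality; the operation and the 50 ms budget are arbitrary.
import time

def core_operation(n):
    """Stand-in for the steel-thread transaction under test."""
    return sum(i * i for i in range(n))

def timed_ms(func, *args):
    """Wall-clock duration of one call, in milliseconds."""
    start = time.perf_counter()
    func(*args)
    return (time.perf_counter() - start) * 1000.0

BUDGET_MS = 50.0
elapsed = timed_ms(core_operation, 10_000)
assert elapsed < BUDGET_MS, f"steel thread took {elapsed:.1f} ms, budget {BUDGET_MS} ms"
```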
Critiquing or evaluating the product is what business users or testers do when they assess and make judgments about the product.
These are the tests performed in Quadrant 3 of our Agile Testing Quadrants.
It is difficult to automate business-facing tests that critique the product, because such testing relies on human intellect, experience, and insight.
You won’t have time to do any Quadrant 3 tests if you haven’t automated tests in Quadrants 1 and 2.
Evaluating or critiquing the product is about manipulating the system and trying to recreate the actual experience of end users.
Show customers what you are developing early & often.
End-of-iteration demos are important to see what has been delivered and revise priorities.
Rather than just waiting for end-of-sprint demos, use any opportunity to demonstrate changes as you go.
Choose a frequency of demos that works for your team. Informal demos can be more productive.
Scenario Testing – Business users can help define plausible scenarios and workflows that can mimic end user behavior.
Soap Opera Testing – A term coined by Hans Buwalda (2003) that can help the team understand business and user needs. Ask “What’s the worst thing that can happen, and how did it happen?”
As an investigative tool, it is a critical supplement to the story tests and our automated regression suite.
Exploratory testing is a sophisticated, thoughtful approach to testing without a script, combining learning, test design and test execution.
There are two types of usability testing. The first is done up front by user experience folks, using tools such as wireframes to drive programming. These are part of Quadrant 2.
The second type is the kind of usability testing that critiques the product. We use tools such as user personas and our intuition to help us look at the product with the end user in mind.
Instead of just thinking about testing interfaces, we can also look at APIs and consider attacking the problem in other ways and consider tools like simulators & emulators.
User manuals & online help need validation just as much as software. Your team may employ specialists like technical writers who create & verify documentation. The entire team is responsible for the quality of documentation.
On top of that, I get to present not one but two talks! My topics are:
“The What, When & How of Test Automation” 45 mins
In this I will talk about preparing robust automation strategies. Agile means pace and agile means change. With frequent time-boxed releases and flexible requirements, test automation faces numerous challenges. Haven’t we all asked what to automate and how to go about the daily tasks with the automation cloud looming over our heads? Here we’ll discuss answers to some of these questions and try to outline a number of approaches that agile teams can take in their selection of what to automate, how to go about their automation and whom to involve, and when to schedule these tasks so that the releases are debt free and of the best quality.
“Gamify your Agile workplace” 15 mins
In this I’ll present live some innovation games and have audience volunteers engage and play games based on known scenarios. Let’s Play and learn some useful Innovation Games that can help you gamify your agile team and workplace, making the team meetings shorter and communication more fun!
Both these topics are close to my heart and I am looking forward to sharing my thoughts with a wider audience.
I am also excited to meet all the awesome speakers at the event, as well as get to know the fantastic team of organizers behind this event!
When I first heard about risk-based testing, I interpreted it as an approach that could help devise a targeted test strategy. Back then I was working with a product-based research and development team. We were following Scrum and were perpetually working with tight deadlines. These short sprints had lots to test and deliver, in addition to the cross-environment and non-functional testing aspects.
Learning about risk-based testing gave me a new approach to our testing challenges. I believed that analyzing the product as well as each sprint for the impending risk areas and then following them through during test design and development, execution and reporting would help us in time crunches.
But before I could think about adopting this newfound approach into our test planning, I had a challenge at hand: to convince my team.
In my recent article published at Gurock’s blog site, I have written about my experience exploring risk-based testing and convincing my agile team of its importance and relevance using their own sprints’ case study.
By analyzing a sprint’s user stories, calculating the Risk Priority Number (RPN) and defining the extent of testing, I was able to show, through my own team’s case study, ways our testing could benefit and better itself by following a risk-based approach in a simplified manner.
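For illustration only, here is a sketch of the RPN arithmetic using one common FMEA-style formulation (severity x likelihood x detectability, each rated 1-5); the stories and their ratings are invented:

```python
# Sketch of Risk Priority Number arithmetic, one common FMEA-style
# formulation: severity x likelihood x detectability, each rated 1-5.
# The stories and ratings below are invented for illustration.
def rpn(severity, likelihood, detectability):
    for value in (severity, likelihood, detectability):
        if not 1 <= value <= 5:
            raise ValueError("ratings must be between 1 and 5")
    return severity * likelihood * detectability

stories = {
    "payment gateway integration": rpn(5, 4, 4),
    "profile page cosmetic fixes": rpn(2, 3, 1),
    "report export to CSV":        rpn(3, 3, 2),
}

# Highest RPN first: test these stories earliest and most deeply,
# and let the number drive the extent of testing for each story.
priority_order = sorted(stories, key=stories.get, reverse=True)
```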