Use the Agile Test Quadrants to help you identify the different types of test automation tools you might need for each project, even each iteration.
Test Automation Pyramid (introduced by Mike Cohn)
Lowest layer – the bulk of the automation effort: automated unit tests, the technology-facing tests that support the team. They give the quickest feedback and help programmers code much more quickly, using the xUnit family of tools.
Middle layer – automated business-facing tests that support the team by answering "Are we building the right thing?" These tests operate at the API level, behind the GUI, bypassing the presentation layer, which makes them less expensive to write & maintain. Fit & FitNesse are good examples; the tests are written in the domain language.
Top tier – should be the smallest automation effort, as these tests have the lowest ROI. They are done through the GUI, operating on the presentation layer, so they are more expensive to write, more brittle, and need more maintenance.
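The bottom of the pyramid can be as plain as an xUnit-family test. A minimal sketch using Python's unittest; `apply_discount` is a hypothetical function under test, invented for illustration:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical production code: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    """Technology-facing unit tests: fast, isolated, run on every build."""

    def test_normal_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_returns_full_price(self):
        self.assertEqual(apply_discount(100.0, 0), 100.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main(argv=["discount_tests"], exit=False)
```

Tests like these run in milliseconds, which is what makes this layer the cheapest place to get feedback.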
Any tedious or repetitive task involved in developing software is a candidate for automation.
An automated deployment process is imperative – getting automated build emails listing every change made is a big help to testers. It speeds up testing & reduces errors.
A fast running continuous integration and build process gives the greatest ROI of any automation effort.
Another useful area for automation is data creation or setup. Cleaning up test data is as important as generating it. Your data creation toolkit should include ways to tear down the test data so it doesn’t affect a different test or prevent rerunning the same test.
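A minimal sketch of paired data setup and teardown, using Python's unittest and an in-memory SQLite database; the table and rows are invented for illustration:

```python
import sqlite3
import unittest

class CustomerQueryTest(unittest.TestCase):
    """Each test builds its own data and tears it down afterward, so
    tests can rerun in any order without affecting one another."""

    def setUp(self):
        # Generate throwaway test data before each test.
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
        self.db.executemany("INSERT INTO customers VALUES (?, ?)",
                            [(1, "Ada"), (2, "Grace")])

    def tearDown(self):
        # Clean up so leftover data cannot leak into another test.
        self.db.close()

    def test_customer_count(self):
        count = self.db.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
        self.assertEqual(count, 2)

if __name__ == "__main__":
    unittest.main(argv=["data_tests"], exit=False)
```

The same setup/teardown pairing applies whatever the data store is: every piece of generated data needs a matching cleanup step.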
What we shouldn’t automate
Tests that will never fail
Plan in plenty of time for evaluating tools, setting up build processes, and experimenting with different test approaches in the initial iterations.
If management is reluctant to give the team time to implement automation, explain the trade-offs clearly. Work towards a compromise.
We will always have deadlines, and we always feel pressed for time. There is never enough time to go back and fix things. During your next planning meeting, budget time to make meaningful progress on your automation efforts.
Good test management ensures that tests can provide effective documentation of the system and of the development progress
“Why we Want to Automate Tests and What holds us back”
Test automation is a core agile practice. Agile projects depend on automation.
Manual tests take too long and are error prone.
Automated regression tests provide a safety net. They give feedback early & often.
Automated builds, deployment, version control and monitoring also go a long way toward mitigating risk and making your development process more consistent.
“Build Once, Deploy to Many” – is a tester’s dream!
Projects succeed when good people are free to do their best work. Automating tests appropriately makes that happen!
If we become complacent about our testing challenges, depend solely on automated tests to find our issues, and then just fix them enough for the tests to pass, we do ourselves a disservice.
However, if we use the tests to identify problem areas and fix them the right way or refactor as needed, then we are using the safety net of automation in the right way.
When tests that illustrate examples of desired behavior are automated, they become ‘living’ documentation of how the system actually works.
Barriers to Automation –
Programmers’ attitude – “Why automate at all?”
The Hump of Pain – the initial learning curve
Fear – Non-programming testers fear they have nothing to contribute
Old habits, team culture
Without automated regression tests, manual regression testing will continue to grow in scope and eventually may simply be ignored.
Teams with automated tests and build processes have a more stable velocity.
Good test design practices produce simple, well-designed, continually refactored, maintainable tests.
Team culture & history may make it harder for programmers to prioritize automating business-facing tests over coding new features. Using Agile principles & values helps the whole team overcome barriers to test automation.
This chapter reviews all four Agile Testing Quadrants through the success story of a team that tested its whole system using a variety of home-grown and open source tools.
The system monitored remote oil and gas production wells. The application included a huge legacy codebase with very few unit tests, and the team was slowly rebuilding it using new technology. The chapter describes how the team used tests from all four quadrants to support them.
The team used Test-Driven Development and Pair Programming wholeheartedly, adding unit tests and refactoring any legacy code they encountered along the way.
The product engineer wrote acceptance tests and shared them with the developers and testers before they began coding.
Automation involving functional test structure, web services and embedded testing
Exploratory testing to supplement the automated tests to find critical issues.
Don’t forget to Document… but only what is useful
Finding ways to keep customers involved in all types of testing, even if they are remote. Have UATs and end-to-end tests.
Use lessons learned during testing to critique the product and drive development in the next iterations.
“Critiquing the Product Using Technology-Facing Tests”
Technology-facing tests that critique the product are more concerned with the non-functional aspects – deficiencies of the product from a technical point of view.
We describe requirements using a programming domain vocabulary. This is the focus of Quadrant 4 of our Agile Testing Quadrants.
Customers simply assume that software will be designed to properly accommodate the potential load, at a reasonable rate of performance. It doesn’t always occur to them to verbalize those concerns.
Tools, whether home-grown or acquired, are essential to succeed with Quadrant 4 testing efforts.
“Many teams find that a good technical tester or toolsmith can take on many of these tasks.”
Take a second look at the skills that your team already possesses, and brainstorm about the types of “ility” testing that can be done with the resources you already have. If you need outside teams, plan for that in your release and iteration planning.
The information these (Quadrant-4) tests provide may result in new stories and tasks in areas such as changing the architecture for better scalability or implementing a system-wide security solution. Be sure to complete the feedback loop from tests that critique the product to tests that drive changes that will improve the non-functional aspects of the product.
When Do You Do It?
Technical stories can be written to address specific requirements.
Consider a separate row on your story board for tasks needed by the product as a whole.
Find a way to test them early in the project
Prioritize stories such that a steel thread or a thin slice is complete early, so that you can create a performance test that can be run and continued as you add more functionality.
The time to think about your non-functional tests is during release or theme planning.
The team should consider various types of “ility” testing – including security, maintainability, interoperability, compatibility, reliability, and installability – and should execute them at appropriate times.
Performance, scalability, stress, and load tests should be done from the beginning of the project.
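A lightweight performance check can run from the first iteration onward and grow with the thin slice. A sketch in Python; `search_catalog`, the catalog size, and the 200 ms budget are all invented for illustration:

```python
import statistics
import time

def search_catalog(term, catalog):
    """Hypothetical operation under test: a linear scan of product names."""
    return [item for item in catalog if term in item]

def median_runtime_ms(fn, runs=50):
    """Time repeated runs of fn and report the median, in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000)
    return statistics.median(samples)

catalog = [f"product-{i}" for i in range(10_000)]
median_ms = median_runtime_ms(lambda: search_catalog("product-42", catalog))

# Fail the build early if the median runtime regresses past the budget.
assert median_ms < 200, f"search too slow: {median_ms:.1f} ms"
```

Because the check is part of the build from the start, a performance regression shows up in the iteration that introduced it, not at the end of the release.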
Many software teams were forced to work remotely because of the onset of the global pandemic of COVID-19. Months in, most teams have now found their pace and made their peace with it. Hopefully, you’ve gotten comfortable and set a routine for yourself in your new work-from-home setup.
But are you engaging enough with your colleagues? Or are your conversations limited to virtual meetings and video calls? It’s important to have other ways of staying connected with your team.
As an organization, it is important to realize that however close-knit or small your team may be, not having a proper open channel of communication may make people feel out of the loop. In my article published at the TestRail blog, I discussed some tips on how to keep yourself and your team engaged when working from home.
Bump up communication
Before, it may have been enough for the manager to have one-on-one conversations with team members once a month, but our new remote situation calls for a little more. Managers should increase the frequency as well as the quality of the conversations they have with their teams. Strive to understand what teams are struggling with, remove their impediments, and ensure a smooth workday for each person.
As a team member, you too now have the responsibility to stay in touch with your peers more often. It is important for you to participate more in conversations with others rather than just being a spectator in your group chats or calls. A simple greeting and an update about what you are working on today is a good start that helps others peek into your day, and it increases the chances of their doing the same.
Provide clear directions
Managers and leaders need to focus now more than ever on setting up open lines of communication within teams. Give clear directions about what is expected of everyone, share what you feel about their work in the form of constructive feedback, and ask them their opinions. It is important to be empathetic and understanding and to have a listening ear.
In the middle of this chaos, it is important for people to have specific instructions, tasks, and goals so that they can focus on achieving objectives and have some structure to their days. Achieving these small tasks will make their time more productive and motivate them to get more done!
Critiquing or evaluating the product is what business users or testers do when they assess and make judgments about the product.
These are the tests performed in Quadrant 3 of our Agile Testing Quadrants
It is difficult to automate business-facing tests that critique the product, because such testing relies on human intellect, experience, and insight.
You won’t have time to do any Quadrant 3 tests if you haven’t automated tests in Quadrants 1 and 2.
Evaluating or critiquing the product is about manipulating the system and trying to recreate the actual experience of end users.
Show customers what you are developing early & often.
End-of-iteration demos are important to see what has been delivered and revise priorities
Rather than just waiting for end of sprint demos, use any opportunity to demonstrate changes as you go.
Choose a frequency of demos that works for your team. Informal demos can be more productive
Scenario Testing – Business users can help define plausible scenarios & workflows that can mimic end user behavior
Soap Opera Testing – a term coined by Hans Buwalda (2003); it can help the team understand business & user needs. Ask “What’s the worst thing that can happen, and how did it happen?”
Exploratory Testing – a sophisticated, thoughtful approach to testing without a script, combining learning, test design, and test execution. As an investigative tool, it is a critical supplement to the story tests and our automated regression suite.
There are two types of usability testing. The first is done up front by user experience folks, using tools such as wireframes to drive programming. These tests are part of Quadrant 2.
The second type is the usability testing that critiques the product. We use tools such as user personas and our intuition to help us look at the product with the end user in mind.
Instead of just thinking about testing through user interfaces, we can also look at APIs, consider attacking the problem in other ways, and consider tools like simulators & emulators.
User manuals & online help need validation just as much as software. Your team may employ specialists like technical writers who create & verify documentation. The entire team is responsible for the quality of documentation.
Writing defect reports is a constant part of a tester’s daily life, and an important one too! How you report the bugs you find plays a key role in the fate of the bug and whether it’s understood and resolved or ends up being deferred or rejected.
It is imperative for every tester to communicate the defects they find well. In my article published at TestRail blog, I discuss four simple tips to help you write better bug reports.
Study past bug reports
If you are new to your team or new to testing itself, the best way to learn about good bug reporting is to read the team’s past bug reports. Try to understand the bugs and see if you could reproduce them.
By doing this exercise, you learn the best way to present bugs so that the team can understand them. You’ll get a feel for the business language and the project’s jargon so you can describe features and modules. You may also see some imperfections in the past reports, so you can think about how to improve them and what other information would be useful to include.
Create your own game plan
Create a shortcut for yourself, like writing down a summary or title of each bug you find as you go about testing and saving the screenshot. When you get down to reporting, you can quickly fill out the steps, as well as expected and actual results, and attach the saved screenshots. Doing this could be faster and save you the effort of repeating steps just to get the needed screenshots and logs.
“Toolkit for Business-Facing Tests that Support the Team”
As agile development has gained in popularity, we have more and more tools to help us capture examples and turn them into executable tests.
Your strategy for selecting the tools you need should be based on your team’s skill set, the technology your application uses, your team’s automation priorities, time, and budget constraints. Your strategy should NOT be based on the latest and coolest tool in the market.
In agile, simple solutions are usually best.
Some tools that can help us illustrate desired behavior with examples, brainstorm potential implementations and ripple effects, and create requirements we can turn into tests are:
Software-based tools
A picture is worth a thousand words, even in agile teams. Mock-ups show the customer’s desires more clearly than a narrative possibly could. They provide a good focal point for discussing the desired code behavior.
Visuals such as flow diagrams and mind maps are good ways to describe an overview of a story’s implementation, especially if they are created by a group of customers, programmers, and testers.
Tools such as Fit (Framework for Integrated Tests) and FitNesse were designed to facilitate collaboration and communication between the customer and development teams.
Finding the right electronic tools is particularly vital for distributed teams (chat, screensharing, video conferencing, calling, task boards etc.)
Selenium, Watir and WebTest are some examples of many open source tools available for GUI testing.
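Fit and FitNesse express expectations as tables that customers can read and programmers can execute. The idea can be sketched in plain Python as a table-driven test; the `shipping_fee` rule and its $50 threshold are invented for illustration:

```python
# Hypothetical business rule: shipping is free for orders of $50 or more.
def shipping_fee(order_total):
    return 0.0 if order_total >= 50 else 5.95

# The "table" a business expert could fill in: inputs and expected output.
DECISION_TABLE = [
    # (order_total, expected_fee)
    (49.99, 5.95),
    (50.00, 0.0),
    (120.00, 0.0),
]

def run_table(table):
    """Check every row; return the rows that disagree with the code."""
    return [(total, expected, shipping_fee(total))
            for total, expected in table
            if shipping_fee(total) != expected]

assert run_table(DECISION_TABLE) == [], run_table(DECISION_TABLE)
```

The value of the table format is that a customer can add a boundary row like `(49.99, 5.95)` without writing any code, and the test fails loudly if the rule drifts.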
Home-Brewed Test Automation Tools
Bret Pettichord (2004) coined the term ‘home-brewed’ for tools agile teams create to meet their own unique testing needs. This allows even more customisation than an open source tool. Home-brewed tools provide a way for non-technical customer team members to write tests that are actually executable by the automated tool; they are tailored to the team’s needs, designed to minimize the total cost of ownership, and often built on top of existing open source tools.
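A home-brewed tool can be as small as a keyword-driven runner. A sketch in Python; the keywords and the `ShoppingCart` class are invented for illustration, and a real version would map keywords onto the team's own application code:

```python
class ShoppingCart:
    """Stand-in for the application under test."""
    def __init__(self):
        self.items = []

    def add(self, name):
        self.items.append(name)

def check(actual, expected):
    assert actual == expected, f"expected {expected}, got {actual}"

def run_test(steps):
    """Execute a test written as plain (keyword, argument) rows."""
    cart = ShoppingCart()
    # Programmers maintain this mapping; customers only write the rows.
    keywords = {
        "add item": lambda arg: cart.add(arg),
        "expect item count": lambda arg: check(len(cart.items), int(arg)),
    }
    for keyword, arg in steps:
        keywords[keyword](arg)

# A test a non-technical team member could write without touching the code:
run_test([
    ("add item", "book"),
    ("add item", "pen"),
    ("expect item count", "2"),
])
```

Because the keyword vocabulary is owned by the team, it can grow exactly as fast as the tests need it to, which is the "minimize total cost of ownership" point above.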
The best tools in the world won’t help if you don’t use them wisely. Test tools might make it very easy to specify tests, but whether you are specifying the right tests at the right time is up to you.
Writing detailed test cases that communicate desired behavior is both art and science.
Whenever a test fails in the continuous integration (CI) and build process, the team’s highest priority should be to get the build passing again. Everyone should stop what they are doing and make sure the build goes ‘green’ again. Determine whether a bug has been introduced or the test simply needs to be updated to accommodate intentionally changed behavior. Fix the problem, check it in, and make sure all tests pass!
Experiment – so that you can find the right level of detail and the right test design for each story.
Keep your tests current and maintainable through refactoring.
Not all code is testable using automation but work with programmers to find alternative solutions to your problems.
Manual test scenarios can also drive programming if you share them with the programmers early. The earlier you turn them into automated tests, the faster you will realise the benefit.
Start with a simple approach, see how it works, and build on it. The important thing is to get going writing business-facing tests to support the team as you develop your product.
Planning and developing new features at the fast pace of agile is a hard game. Knowing when you are really done and ready to deliver is even harder.
Having predetermined exit criteria helps you be able to make the decision that a feature is truly ready to ship. In my article published at TestRail Blog, I compiled a list of exit criteria you must add to your user story to make it easy to bring conformity and quality to all your features.
All Tasks Are Completed
This first one sounds obvious, but it may not be. I still see many teams struggling with getting their testing done within the sprint. Developers work on a user story and deem it done, while testers are left to play catch-up in the next sprint.
Put that practice to an end once and for all by making sure that no user story can be proclaimed done without having all tasks under it completed, including development tasks, testing tasks, design and review tasks, and any other tasks that were added to the user story at the beginning.
Ensuring all tasks are completed in a sprint also mandates that you begin thinking in depth about each user story and the tasks necessary for each activity to be completed, so that you do not miss out on anything at the end.
Tests Are Automated Whenever Possible
As our agile teams move toward continuous delivery and adopting DevOps, our testing also needs to be automated and made part of our pipelines. Ensuring that test automation gets done within the sprint and keeps pace with new features is essential.
By having test automation tasks be a part of a user story delivery, you can keep an eye out for opportunities to automate tests you are creating, allocate time to do that within the sprint, and have visibility of your automation percentages.
I have used the following exit criteria:
At a minimum, regression tests for the user story must be added to the automation suite
At least 50% of tests created for the user story must be automated
Automated regression must be run at least once within the sprint
Depending on what your automation goals are, decide on a meaningful standard to apply to all your user stories.
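Whatever standard you settle on, it helps to make it checkable. A sketch of such a gate; the helper names and the 50% threshold mirror the example criteria above but are otherwise invented:

```python
def automation_percentage(automated, total):
    """Percentage of a user story's tests that are automated."""
    return 0.0 if total == 0 else round(100 * automated / total, 1)

def meets_exit_criterion(automated, total, threshold=50.0):
    """Hypothetical gate for the 'at least 50% automated' criterion."""
    return automation_percentage(automated, total) >= threshold

# Example: 6 of 10 tests automated clears a 50% bar; 4 of 10 does not.
assert meets_exit_criterion(6, 10)
assert not meets_exit_criterion(4, 10)
```

A small check like this gives the team visibility of its automation percentages and makes the exit criterion the same for every story.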
Exploration is an integral part of testing. Exploring the application is a great strategy for learning about how it works, finding new information and flows, and discovering some unique bugs too!
Many testers perform exploratory testing as a matter of course, and agile teams may make it an integral part of their tasks. But how can you up your exploration game? Simply going around the application and looking or clicking here and there surely cannot be called creative exploration.
As we get into the flow of agile and its fast-moving sprints, we focus on testing tasks for each user story and are constantly thinking of what needs to be done next. But with minimal documentation and limited time to design tests, it is imperative to understand that just executing the written or scripted tests will not be enough to ensure the feature’s quality, correctness, and sanity.
Exploratory testing needs to be counted as a separate task. You can even add it to your user story so that the team accounts for the time spent on it and recognizes the effort.
Testers can use the time to focus on the feature at hand and try out how it works, its integrations with other features, and its behavior in various unique scenarios that may or may not have been thought of while designing the scripted tests. Having exploratory testing as a task also mandates that it be done for each and every feature and gives testers that predefined time to spend on exploration.
In my testing days, this used to be the most creative and fun aspect of my sprints, and it resulted in great discoveries, questions, insights, and defects!