Are Your Test Cases Really Effective?

Test teams are forever designing and adding new tests, running them, and reporting results. But is your test team creating tests that are effective at finding real problems?

How do you know if your tests are actually working, and not just adding to the ever-increasing test count?

In my article published on the TestRail blog, I discuss some ways you can gauge the effectiveness of your tests — and improve them.

Defects Found

The first and most obvious indicator of your test cases' effectiveness is the defects you find when executing them. As you and your team execute the designed test cases, constantly ask yourself these questions:

  • Are these tests guiding me toward defects?
  • Am I finding problems with the predefined test cases? Or do I have to do more exploration before even getting close to a problem?
  • Are these tests exercising unique flows or use paths?

Metrics

You can also look at your defect lists and find related test cases for the defect logged (if you have that ability in your defect management system). This interlinking helps the team understand what test cases led to the issues found.

You can then further analyze whether that test case was created during test design or later added to the list when the issue was found.

Exploration

If your test cases are not effective, you will not find any useful bugs in test execution. That will mean most of your time is spent in unplanned exploration or ad hoc testing. So, by looking at the time spent in actual test execution versus the time spent on ad hoc testing, you can figure out the effectiveness of the test cases you designed.

If your test cases are effective, you will find issues, explore more use paths, navigate through different integrations with other features, and test different aspects of the same functionality.

If at the end of your test execution you feel that you have not done all of that, you can infer that your test cases might be too simplistic or obvious, and therefore not effective enough to find any useful bugs.

History

Continue Reading here ->

How to Decide if You Should Automate a Test Case

Test automation is imperative for the fast-paced agile projects of today. Testers need to continuously plan, design and execute automated tests to ensure the quality of the software. But the most important task is to decide what to automate first. 

In my article published on the Gurock blog, I have compiled a list of questions to help you prioritize what you should automate next and guide your test automation strategy.

Think of this as a checklist that helps you make automation decisions quickly and effectively and create a standard process around them for your team to follow. Here is the list of questions to ask yourself.

Do check out the complete article for a detailed discussion of each of these:

Is the test going to be repeated?

Is it a high-priority feature?

Do you need to run the test with multiple datasets or paths? 

Is it a regression or smoke test?

Is automating this test feasible with your chosen test automation tool?

Is the area of your app that this is testing prone to change?

Is it a random negative test?

Can these tests be executed in parallel, or only in sequential order?

Are you doing it only for the reports?

Test automation tools provide useful insights into software quality that you can showcase through reports. But are these reports the only reason you are looking at automation? Just looking at the red or green status of the test reports might not be the best way to assess software quality. You will need to spend time analyzing which tests failed, why they failed, and what needs to be corrected. Tests created once will need maintenance and continuous monitoring to keep them up to date. All of that needs to be kept in mind, and the effort needs to be accounted for.

There is more to test automation than just the fancy reports!

Looking at the questions above, analyze the state of your test case, the intent behind its automation, and its feasibility, as well as the value that you might get out of it. I hope that helps you decide which tests you should or should not be picking for automation!

(Image credit: https://unsplash.com/photos/FlPc9_VocJ4)

Ways to Generate Quick Test Ideas

As testers, we look at everything with a critical eye. As soon as something comes up for testing, our instinct is to get down to examining it and looking for problem areas. After getting a written test script, a new tester would be tempted to begin executing scripted tests right away.

But stopping to gather our thoughts about possible test ideas first is a smarter approach that leads to better, more unbiased test coverage. However, we don’t always have a lot of time to imagine scenarios and different paths. Luckily, there is always some planning we can do beforehand.

In my article published on the TestRail blog, I shared some tips for generating test ideas in a time crunch.

Revisit classic test techniques

Our old, trusted test design techniques like boundary value analysis, equivalence class partitions, decision tables, and state flow diagrams are always a help when thinking about test cases. Although most of them are ingrained in the thought process of a tester and are mostly common sense, giving them a revisit, however informal, may still give us some more test ideas.

For example, creating a quick decision table for the interaction of two or more variables to observe the behavior of the system may reveal some unique combination that we might have missed. Or a quick boundary value analysis for the age field in our web form may show us a special case we had not considered.
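To make that concrete, here is a minimal sketch of how such a boundary value analysis can be captured directly as a parametrized test. The `is_valid_age` validator and the 18-to-60 range are assumptions made up for illustration, not taken from any particular form.

```python
# A minimal sketch of boundary value tests for a hypothetical age field,
# assuming the form accepts ages 18 through 60 inclusive.
import pytest


def is_valid_age(age: int) -> bool:
    """Hypothetical validator for the age field on a web form."""
    return 18 <= age <= 60


# Test values sit on and just around each boundary.
@pytest.mark.parametrize("age, expected", [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (19, True),   # just above the lower boundary
    (59, True),   # just below the upper boundary
    (60, True),   # upper boundary
    (61, False),  # just above the upper boundary
])
def test_age_boundaries(age, expected):
    assert is_valid_age(age) == expected
```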

Similarly, using state transition diagrams to draw end-to-end flows can help not only the testers, but also the developers in imagining the overall system flow and revealing problem areas.
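In the same spirit, a transition table drawn from such a diagram can be turned into a quick check that exercises every pair of states, valid and invalid. This is only a sketch assuming a made-up order workflow; `try_transition` is a placeholder for whatever call drives your real system.

```python
# A minimal sketch of deriving tests from a state transition diagram,
# assuming a hypothetical order workflow:
# created -> paid -> shipped -> delivered, with cancellation allowed before shipping.

# Allowed transitions, transcribed from the diagram (this is the test oracle).
ALLOWED = {
    ("created", "paid"), ("created", "cancelled"),
    ("paid", "shipped"), ("paid", "cancelled"),
    ("shipped", "delivered"),
}
STATES = ["created", "paid", "shipped", "delivered", "cancelled"]


def try_transition(current: str, target: str) -> bool:
    """Stand-in for the real system call; replace with the application under test."""
    return (current, target) in ALLOWED  # placeholder behaviour


def test_all_transition_pairs():
    # Every valid transition must be accepted and every other pair rejected,
    # which covers the invalid paths we often forget to try by hand.
    for current in STATES:
        for target in STATES:
            assert try_transition(current, target) == ((current, target) in ALLOWED)
```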

Look at the history

The history of the project or the system can give us many insights into what we are dealing with, where the common defect clusters are, and the most problematic components.

If you are new to the test team, start by looking at the defects logged in past sprints or releases. You can then define and think of more test cases based on past defects and the components that have had the greatest number of defects.

If you’ve been part of the team for a while, you are probably intuitively bound to focus on these areas. But even then, it will help to consciously make an effort to list the most common types of bugs encountered and most problematic areas based on your experience. This will help not only you, but also your new and junior team members. Read full post->

Read More »

4 Tips to Create a Simplified Test Plan for Your Agile Project

Planning is an integral part of initiating any project or activity, including software testing. Although formal test plans may seem outdated and unnecessary to most fast-paced agile teams that prioritize cutting down on documentation, it’s still a good idea to keep a pared-down test plan that can guide the testing effort throughout the project.

In my article published on the TestRail blog, I talk about four useful tips to create a simplified test plan for your agile project.

1. Include the basics

A test plan can be as detailed as the team requires, but at the minimum, it must contain the context, timelines, and a brief overview of the testing activities expected to be performed in the project, release, or sprint.

Based on the level of test plan you are creating, begin with basic headings like schedule and timelines, testing activities and types of testing expected to be performed, people involved in testing, and any new hires or training required for the project or release.

2. Build off the project plan

Most of the test plan can be derived from the context set by the overall project plan, so give that a thorough read and get the details of the project lifecycle, expected delivery schedule, interaction points with other teams, etc.

Note any specifics that emerge, like shared resources with other teams, sync points for integration tests throughout the project’s lifecycle, or special types of testing needed, like performance, load or security testing.

3. Define a clear scope

Let’s say your project is a mobile application that needs to work on both Android and iOS. It would be beneficial to list all the environments and OS versions that need to be supported. Including them in the test plan ensures you set up the test environments early and allocate testing tasks across all of them. It also ensures that you are not bogged down by repeated testing on numerous test environments at the time of release. What is not defined in the initial test plan can automatically be considered out of scope.

Continue Reading ->

4. Use visual tools like mind maps

A simple Agile Test Plan using a Mind map

Read full article here

Is Test Automation Alienating Your Business Testers?

With numerous test automation tools and frameworks available today, many in the software testing industry are focused on learning them all. It is important to stay updated with new technology. But are testers losing something in the race to become more technical and equipped with automation skills?

In my article published on the TestRail blog, I examine ways to tell whether your test automation is becoming so technical and code-intensive that it is in danger of alienating the subject-matter expert testers who know the core of your business best.

Technology should serve people

It is important to understand and remember that test automation tools have been designed to make testers’ lives easier and better. They are not intended to replace testers or overpower them. They make tests execute faster, with more accuracy and fewer errors, so if they eliminate anything, it is redundancy and repetitive work. This technology is meant to serve testers — to save their time and effort and give them more freedom.

To this end, the first consideration when adopting any technology must be its fitness for use in the project, not its popularity in the market. The skills needed to adopt the tool and begin using it in the project should be easy to obtain through hands-on learning or training. Read full article ->

Testing is creative

Testing is a creative job, and it always has been. The advent of new tools and technology has not changed this fact. Tools can do part of a tester's job, but they still cannot test. Although some people may argue that artificial intelligence and machine learning can take over many of the creative aspects, we are not there yet. We still want and need a human to capture the creative tests, discuss the pros and cons of design aspects, peer-review test cases, and report problems.

Everyone can contribute to test automation

When we look at testers’ resumes, the tendency is to look for tools they can work with. But the more important skill we need is the ability to contribute to test automation in one way or another. We cannot judge this just by asking whether a person is able to write test automation scripts or knows a certain programming language. They may be able to learn the Gherkin format to design and write feature files for Cucumber tests. Or, if you decide to adopt a keyword-driven framework, they could pick up the keywords and begin writing tests so that the same test cases can double as test scripts.
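As a rough illustration of the keyword-driven idea, here is a minimal sketch in Python. The keywords, action functions, and the login scenario are all hypothetical; in a real framework the actions would drive your application, and the keyword test case could live in a spreadsheet or test management tool.

```python
# A minimal sketch of a keyword-driven setup with hypothetical keywords and hooks.

def open_login_page(ctx):
    ctx["page"] = "login"


def enter_credentials(ctx, user, password):
    ctx["user"], ctx["password"] = user, password


def verify_dashboard_shown(ctx):
    # Placeholder check; a real implementation would query the UI or an API.
    assert ctx.get("page") == "login" and ctx.get("user")


# Keyword vocabulary the framework understands.
KEYWORDS = {
    "Open Login Page": open_login_page,
    "Enter Credentials": enter_credentials,
    "Verify Dashboard Shown": verify_dashboard_shown,
}

# A test case written by a non-programming tester: just keywords and data.
LOGIN_TEST = [
    ("Open Login Page",),
    ("Enter Credentials", "alice", "s3cret"),
    ("Verify Dashboard Shown",),
]


def run(test_case):
    ctx = {}
    for keyword, *args in test_case:
        KEYWORDS[keyword](ctx, *args)  # each keyword maps to an implemented action


if __name__ == "__main__":
    run(LOGIN_TEST)
    print("LOGIN_TEST passed")
```

Written this way, a tester who knows the domain can add new test cases by combining existing keywords, while adding a new keyword remains a small, well-scoped coding task.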

Read More »

Read Along- ‘Agile Testing’ Chapter-17

“Iteration Kickoff”

  • Most teams kick off their new iteration with a planning session, where they discuss one story at a time, writing and estimating all of the tasks needed to implement it.
  • Testing task cards need to be written along with development task cards and estimated realistically.
  • When writing programming task cards, make sure that coding task estimates include time for writing unit tests and for all necessary testing by programmers.
  • Testers should help make sure that all necessary cards are written and they have reasonable estimates.

Your job as a tester is to make sure enough time is allocated to testing and to remind the team that testing & quality are the responsibility of the whole team. When the team decides how many stories they can deliver in an iteration, the question isn’t “How much coding can we finish?” but “How much coding and testing can we complete?”

Commit Conservatively – It is always better to bring in another story later than to drop a picked story.

  • Working closely with customers or customer proxies is one of the most important activities as an agile tester. Good communication usually takes work.
  • We want “big-picture” tests to help the programmers get started in the right direction on a story. High level tests should convey the main purpose behind the story.
  • Don’t forget to ask the programmers what they think you might have missed. What are the high-risk areas of the code? Where do they think testing should be focused?

When Testability is an issue, make it the team’s problem to solve.

One beneficial side-effect of reviewing the tests with programmers is the cross-learning that happens.

High-level test cases, along with the executable tests you'll write during the iteration, will form the core of the application's documentation.

People unfamiliar with agile development often have the misconception that there’s no documentation. In fact, agile projects produce usable documentation that contains executable tests and thus, is always up to date.

Read Along- ‘Agile Testing’ Chapter-15

“An Iteration in the life of a Tester”

  • Testers bring a different viewpoint to planning and estimation meetings. They need to be a part of the story sizing process.
  • The team needs to develop in small, testable chunks in order to help decide which stories are tentatively planned for which iteration. The keyword here is ‘testable’.
  • If there are stories that present a big testing challenge, it might be good to do those early on.

Release Planning is the time to start asking for examples and use cases of how the features will be used, and what value they'll provide. Drawing flowcharts or sample calculations on a whiteboard can help pinpoint the core functionality.

  • The agile tester thinks about how each story might affect the system as a whole or other systems that ours has to work with.
  • In agile development, the test plan must be concise and lightweight, assessing testing issues, including risk analysis and identifying assumptions. The biggest benefit of test planning is the planning itself.

This chapter shows examples of lightweight agile test plans created by Lisa and Janet that are very useful! Here is my take on creating a simple agile test plan using a mind map:

Agile Test Plan – using a Mind Map

The chapter discusses task boards and how they can be leveraged. Here is my take on how agile teams can use task boards, which I wrote a few months back:

https://testwithnishi.com/2019/07/25/4-ways-task-boards-can-help-agile-teams/

Agile metrics are key to measuring the team's progress. Plan what metrics you want to capture for the life of the release, think about what problem you are trying to solve, and capture only those metrics that are meaningful for your team.

Here is something I wrote about useful and not-so-useful Agile metrics:

https://testwithnishi.com/2019/12/04/metrics-your-agile-team-should-should-not-be-tracking/

Don’t get caught up in committing to your plans; the situation is bound to change. Instead, prepare to do the right activities and get the right resources in time to meet the customer’s priorities!

Read Along- ‘Agile Testing’ Chapter-13

“Why we Want to Automate Tests and What holds us back”

  • Test automation is a core agile practice. Agile projects depend on automation.
  • Manual tests take too long and are error prone.
  • Automated regression tests provide a safety net. They give feedback early and often.
  • Automated builds, deployment, version control and monitoring also go a long way toward mitigating risk and making your development process more consistent.

“Build Once, Deploy to Many” – is a tester’s dream!

Projects succeed when good people are free to do their best work. Automating tests appropriately makes that happen!

If we become complacent about our testing challenges and depend solely on automated tests to find our issues, and then just fix them enough for the test to pass – we do ourselves a disservice.

However, if we use the tests to identify problem areas and fix them the right way or refactor as needed, then we are using the safety net of automation in the right way.

When tests that illustrate examples of desired behavior are automated, they become ‘living’ documentation of how the system actually works.

Barriers to Automation –

  • Programmer’s attitude – Why automate at all
  • The Hump of Pain – the initial learning curve
  • Initial Investment
  • Fear – Non-programming testers fear they have nothing to contribute
  • Legacy code
  • Old habits, team culture
  • Without automated regression tests, manual regression testing will continue to grow in scope and eventually may simply be ignored.
  • Teams with automated tests and build processes have a more stable velocity.
  • Good test design practices produce simple, well-designed, continually refactored, maintainable tests.
  • Team culture & history may make it harder for programmers to prioritize automation of business-facing tests than coding new features. Using Agile principles & values helps the whole team overcome barriers to test automation.

4 Exit Criteria Your User Stories Must Have

Planning and developing new features at the fast pace of agile is a hard game. Knowing when you are really done and ready to deliver is even harder.

Having predetermined exit criteria helps you decide when a feature is truly ready to ship. In my article published on the TestRail blog, I compiled a list of exit criteria you should add to your user stories to bring conformity and quality to all your features.

All Tasks Are Completed

This first one sounds obvious, but it may not be. I still see many teams struggling with getting their testing done within the sprint. Developers work on a user story and deem it done, while testers are left to play catch-up in the next sprint.

Put that practice to an end once and for all by making sure that no user story can be proclaimed done without having all tasks under it completed, including development tasks, testing tasks, design and review tasks, and any other tasks that were added to the user story at the beginning.

Ensuring all tasks are completed in a sprint also mandates that you begin thinking in depth about each user story and the tasks necessary for each activity to be completed, so that you do not miss out on anything at the end.

Tests Are Automated Whenever Possible

As our agile teams move toward continuous delivery and adopting DevOps, our testing also needs to be automated and made a part of our pipelines. Ensuring that test automation gets done within the sprint and always keeps pace with new features is essential.

By having test automation tasks be a part of a user story delivery, you can keep an eye out for opportunities to automate tests you are creating, allocate time to do that within the sprint, and have visibility of your automation percentages.

I have used the following exit criteria:

  • At a minimum, regression tests for the user story must be added to the automation suite
  • At least 50% of tests created for the user story must be automated
  • Automated regression must be run at least once within the sprint

Depending on what your automation goals are, decide on a meaningful standard to apply to all your user stories.
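As one way to make such a criterion checkable, here is a minimal sketch that computes the automated share of a story's tests against a 50% threshold. The list of test cases and the `automated` flag are hypothetical; in practice this data would come from an export or API of your test management tool.

```python
# A minimal sketch of checking the "at least 50% automated" exit criterion
# for a single user story, using a made-up list of its test cases.

story_tests = [
    {"id": "TC-101", "automated": True},
    {"id": "TC-102", "automated": True},
    {"id": "TC-103", "automated": False},
    {"id": "TC-104", "automated": True},
]

THRESHOLD = 0.5  # the exit criterion agreed by the team

automated_ratio = sum(t["automated"] for t in story_tests) / len(story_tests)
print(f"Automated: {automated_ratio:.0%}")
assert automated_ratio >= THRESHOLD, "Exit criterion not met: automate more tests"
```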

Read More »

Using a Combination of Scripted, Automated and Exploratory Testing for Optimum QA Coverage

Most test teams today are struggling to find better ways to handle their testing. With the advent of Agile in our software development processes, teams are perennially under pressure to deliver faster releases without lowering their standards of quality. This, in turn, puts more load on in-house test teams to find more of the crucial issues and to prevent defect leakage. For this reason, testers look for strategies and practices that can help them achieve their goals and add more value to the product's quality.

In my opinion as a hands-on agile tester, there is no single silver bullet for quality, but a combination of different types of and approaches to testing can help us get closer to our quality goals. Test teams need to strategize and plan the use of a combination of scripted, automated, and exploratory tests to achieve optimum coverage and the best quality software.

Here is my latest article for the PractiTest QA Learning Centre, where I discuss the need for a combination of scripted, automated, and exploratory tests for optimum QA coverage:

https://www.practitest.com/qa-learningcenter/thank-you/exploratory-testing-optimum-qa-coverage/ 

Scripted Tests

When we look at the typical test approach, it begins with test scripting and designing tests as per the software's functionality. These are created using requirement analysis and test design techniques, along with the common sense and skill of our testers. These scripted tests form the starting point of testing a new feature, change, or addition in the software.

Automated Testing

In addition to running the scripted tests manually, testers also rely on automated tests. These tests are scripted using various test automation tools, and the ability to write such automated test scripts is thus a much-wanted skill nowadays for all test professionals. Running some tests using automated scripts aids repeatability and saves the test team a lot of time and effort. But most importantly, by automating the drudgery away, it spares the tester repeated, laborious manual tests and frees up their time for more creative thinking and exploration of the application.

Exploratory Testing

Exploration of software is basically looking at the feature, functionality, or change and the overall behavior from a learning as well as a critical standpoint. Exploratory testing is a crucial aspect of software testing that almost every tester performs, knowingly or subconsciously.

Cem Kaner coined the term Exploratory Testing in his book “Testing Computer Software” and described it as:

“Simultaneous test design, test execution and learning with an emphasis on learning”


Read More »