Are Your Test Cases Really Effective?

Test teams are forever designing and adding new tests, running them, and reporting results. But is your test team creating tests that are effective at finding real problems?

How do you know if your tests are actually working, and not just adding to the ever-increasing test count?

In my article published on the TestRail blog, I discuss some ways you can gauge the effectiveness of your tests and how to improve them.

Defects Found

The first and most obvious indicator of the effectiveness of your test cases is the defects you find when executing them. As you and your team execute the designed test cases, keep asking yourself these questions:

  • Are these tests guiding me toward defects?
  • Am I finding problems with the predefined test cases? Or do I have to do more exploration before even getting close to a problem?
  • Are these tests exercising unique flows or use paths?

Metrics

You can also go through your defect list and find the test cases related to each logged defect (if your defect management system supports such links). This interlinking helps the team understand which test cases led to the issues found.

You can then further analyze whether each such test case was created during test design or was only added to the list after the issue was found.
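
As a rough illustration (the data structures and field names below are hypothetical, not taken from any particular defect management tool), a short script could tally how many logged defects trace back to test cases that already existed at design time versus ones added only after the issue surfaced:

```python
# Hypothetical export from a defect management system: each defect carries the
# IDs of the test cases linked to it, and each test case records whether it
# existed at design time or was added after the defect was found.
defects = [
    {"id": "DEF-101", "linked_tests": ["TC-12"]},
    {"id": "DEF-102", "linked_tests": ["TC-40", "TC-41"]},
    {"id": "DEF-103", "linked_tests": []},  # found purely through exploration
]
test_cases = {
    "TC-12": {"created_during_design": True},
    "TC-40": {"created_during_design": False},  # written after the bug surfaced
    "TC-41": {"created_during_design": True},
}

found_by_designed_tests = sum(
    1
    for d in defects
    if any(test_cases[t]["created_during_design"] for t in d["linked_tests"])
)
print(f"{found_by_designed_tests}/{len(defects)} defects trace back to designed test cases")
```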

Exploration

If your test cases are not effective, you will not find any useful bugs in test execution. That will mean most of your time is spent in unplanned exploration or ad hoc testing. So, by looking at the time spent in actual test execution versus the time spent on ad hoc testing, you can figure out the effectiveness of the test cases you designed.
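
As a simple sketch of that comparison (the session log and numbers below are invented for illustration), you could tally the hours logged against scripted execution versus ad hoc sessions over a sprint:

```python
# Hypothetical time log for one sprint, in hours, split by type of testing.
sessions = [
    ("scripted", 6.0),
    ("scripted", 4.5),
    ("ad_hoc", 5.0),
    ("ad_hoc", 7.5),
]

scripted = sum(hours for kind, hours in sessions if kind == "scripted")
ad_hoc = sum(hours for kind, hours in sessions if kind == "ad_hoc")

# If most of the time ends up in unplanned exploration, the designed test
# cases are probably not leading the team to real problems.
share = scripted / (scripted + ad_hoc)
print(f"Scripted execution: {scripted}h, ad hoc testing: {ad_hoc}h "
      f"({share:.0%} of testing time spent on designed tests)")
```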

If your test cases are effective, you will find issues, explore more use paths, navigate through different integrations with other features, and test different aspects of the same functionality.

If, at the end of your test execution, you feel that you have not done all of that, you can infer that your test cases might be too simplistic or obvious, and therefore not effective enough to find any useful bugs.

History

Continue Reading here →

Raise your Exploration Game!

Exploration is an integral part of testing. Exploring the application is a great strategy for learning about how it works, finding new information and flows, and discovering some unique bugs too! 

Many testers perform exploratory testing as a matter of course, and agile teams may make it an integral part of their tasks. But how can you up your exploration game? Simply wandering around the application, looking and clicking here and there, surely cannot be called creative exploration.

In my article published on the TestRail blog, I outline what you need to do to bring structure to your exploratory tests and get the most useful information out of them.


Designate time for exploration

As we get into the flow of agile and its fast-moving sprints, we focus on testing tasks for each user story and are constantly thinking of what needs to be done next. But with minimal documentation and limited time to design tests, it is imperative to understand that just executing the written or scripted tests will not be enough to ensure the feature’s quality, correctness, and sanity.

Exploratory testing needs to be counted as a separate task. You can even add it to your user story so that the team accounts for the time spent on it and recognizes the effort.

Testers can use the time to focus on the feature at hand and try out how it works, its integrations with other features, and its behavior in various unique scenarios that may or may not have been thought of while designing the scripted tests. Having exploratory testing as a task also mandates that it be done for each and every feature and gives testers that predefined time to spend on exploration. 

In my testing days, this used to be the most creative and fun aspect of my sprints, and it resulted in great discoveries, questions, insights, and defects!

Read More »

The Partnership of Testing and Checking

Human testing is a craft that involves more than executing a bunch of tests and performing clicks and actions. A tester has a unique understanding of the system and of ways to critique it. Over time, the tester develops a deeper comprehension of the application and its intricacies, integrations, weak points, and history. This makes them the best judge of where the system is likely to fail and the best person to comment on its health.

The Product Risk Knowledge Gap is the difference between what we know about the product and what we need to know. The purpose of testing is to close or at least reduce this gap.

While automated checks can help in finding problems in what we know (and have scripted as checks), they may not help as much in the risk areas of what we do not know about the product. That requires exploration, creativity, intuition, and domain knowledge. This is the human aspect of testing.

The creative and human aspects of testing lie with the tester, something I experienced and wrote about a few years back as a hands-on tester myself: https://testwithnishi.com/2014/12/31/automation-test-suites-are-not-god/


Automated Checks

Automated scripts have built-in steps in the form of pre-defined test data and the verifications we add. These steps are useful for areas of the application that we need to check, double-check, or re-check a number of times, and because these types of checks can be made explicit, they can be automated. Since the same steps are performed the same way over and over again, this is better called “checking” rather than “testing.”
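
As a minimal sketch of such a check (the function and values are hypothetical, written only to illustrate the idea), the test data and the verification are fixed in advance, so every run repeats exactly the same steps:

```python
# A minimal, hypothetical automated check: the input data and the expected
# result are decided up front, and the script simply re-confirms them on
# every run.
def apply_discount(price: float, percent: float) -> float:
    """Stand-in for real application code being checked."""
    return round(price * (1 - percent / 100), 2)


def check_ten_percent_discount():
    # Pre-defined test data and an explicit, repeatable verification.
    assert apply_discount(200.00, 10) == 180.00


if __name__ == "__main__":
    check_ten_percent_discount()
    print("check passed")
```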

Read More »