4 Exit Criteria your User Stories must have

Planning and developing new features at the fast pace of agile is a hard game. Knowing when you are really done and ready to deliver is even harder.

Having predetermined exit criteria makes it much easier to decide that a feature is truly ready to ship. In my article published on the TestRail Blog, I compiled a list of exit criteria you should add to your user stories to bring conformity and quality to all your features.

All Tasks Are Completed

This first one sounds obvious, but it may not be. I still see many teams struggling with getting their testing done within the sprint. Developers work on a user story and deem it done, while testers are left to play catch-up in the next sprint.

Put an end to that practice once and for all by making sure that no user story can be proclaimed done until every task under it is completed, including development tasks, testing tasks, design and review tasks, and any other tasks that were added to the user story at the beginning.

Ensuring all tasks are completed within the sprint also forces you to think in depth about each user story up front, and about the tasks each activity needs, so that nothing is missed at the end.
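To make this gate concrete, here is a minimal sketch in Python of what such a definition-of-done check could look like. The Task and UserStory types are hypothetical stand-ins for whatever your tracker provides, not a real integration.

```python
# Hypothetical sketch: a user story may only be marked done when every
# task under it (development, testing, design, review, ...) is completed.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    kind: str            # e.g. "development", "testing", "design", "review"
    completed: bool = False

@dataclass
class UserStory:
    title: str
    tasks: list[Task] = field(default_factory=list)

    def can_be_marked_done(self) -> bool:
        # A story with no tasks at all is suspicious, not done.
        return bool(self.tasks) and all(t.completed for t in self.tasks)

story = UserStory("Login via SSO", [
    Task("Implement SSO flow", "development", completed=True),
    Task("Write and run tests", "testing", completed=False),
])
assert not story.can_be_marked_done()  # the testing task is still open
```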

Tests Are Automated Whenever Possible

As our agile teams move toward continuous delivery and adopt DevOps, our testing also needs to be automated and made part of our pipelines. Ensuring that test automation gets done within the sprint and keeps pace with new features is essential.

By making test automation tasks part of user story delivery, you can keep an eye out for opportunities to automate the tests you are creating, allocate time to do that within the sprint, and gain visibility into your automation percentages.

I have used the following exit criteria:

  • At a minimum, regression tests for the user story must be added to the automation suite
  • At least 50% of tests created for the user story must be automated
  • Automated regression must be run at least once within the sprint

Depending on what your automation goals are, decide on a meaningful standard to apply to all your user stories; a small sketch of such a check follows.
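As an illustration, the second and third criteria above are easy to express as a mechanical check. In this sketch the counts and the 50% threshold are example values, not a prescription:

```python
# Illustrative check of two automation exit criteria: at least half of the
# tests created for the story are automated, and the automated regression
# suite has run at least once within the sprint.
def automation_exit_criteria_met(total_tests: int,
                                 automated_tests: int,
                                 regression_runs_in_sprint: int,
                                 min_ratio: float = 0.5) -> bool:
    if total_tests == 0:
        return False
    return (automated_tests / total_tests >= min_ratio
            and regression_runs_in_sprint >= 1)

# 12 of 20 tests automated (60%) and one regression run: criteria met.
print(automation_exit_criteria_met(20, 12, 1))  # True
```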


Using a Combination of Scripted, Automated and Exploratory Testing for Optimum QA Coverage

Most test teams today are struggling to find better ways to handle their testing. With the advent of agile in our software development processes, teams are perennially under pressure to deliver faster releases without lowering their standards of quality. This, in turn, puts load on the in-house test teams to find more of the crucial issues and to prevent defect leakage. For this reason, testers look for strategies and practices that can help them achieve their goals and add more value to the product’s quality.

In my opinion as a hands-on agile tester, there is no single silver bullet for quality; rather, a combination of different types of and approaches to testing can get us closer to our quality goals. Test teams need to strategize and plan the use of a combination of scripted, automated, and exploratory tests to achieve optimum coverage and the best quality software.

Here is my latest article for the PractiTest QA Learning Center, where I discuss the need for a combination of scripted, automated, and exploratory tests for optimum QA coverage:

https://www.practitest.com/qa-learningcenter/thank-you/exploratory-testing-optimum-qa-coverage/ 

Scripted Tests

When we look at the typical test approach, it begins with scripting and designing tests according to the software’s functionality. These tests are created by skilled testers using requirement analysis, test design techniques, and plain common sense. These scripted tests form the starting point for testing a new feature, change, or addition to the software.
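For illustration, a scripted test case is essentially a written-down recipe that any tester can execute the same way. Here is a made-up example, sketched as a small Python structure; the feature, steps, and ID are all invented:

```python
# A hypothetical scripted test case, designed up front from the requirements
# so that execution is repeatable and reviewable.
login_test_case = {
    "id": "TC-042",
    "title": "Valid user can log in",
    "preconditions": ["User 'alice' exists with a known password"],
    "steps": [
        "Open the login page",
        "Enter username 'alice' and the correct password",
        "Click 'Sign in'",
    ],
    "expected_result": "The dashboard loads and 'alice' is shown in the header",
}
```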

Automated Testing

In addition to running scripted tests manually, testers also rely on automated tests, scripted using various test automation tools. The ability to write automated test scripts is therefore a much-wanted skill nowadays for all test professionals. Running some tests as automated scripts helps repeatability and saves the test team a great deal of time and effort. Most importantly, by automating the drudgery away, it spares the tester laborious repeated manual tests and frees up their time for more creative thinking and exploration around the application.
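To make this concrete, here is a minimal pytest sketch: once a check like this is automated, it can be re-run on every build at no extra manual cost. The discount function under test is hypothetical, standing in for real application code.

```python
# Minimal automated regression check with pytest (hypothetical code under test).
import pytest

def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

@pytest.mark.parametrize("price, percent, expected", [
    (100.0, 10, 90.0),   # ordinary discount
    (59.99, 0, 59.99),   # no discount
    (20.0, 100, 0.0),    # full discount
])
def test_apply_discount(price, percent, expected):
    assert apply_discount(price, percent) == expected
```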

Exploratory Testing

Exploring software means looking at a feature, functionality, or change, and at the overall behavior of the application, from a learning as well as a critical standpoint. Exploratory testing is a crucial aspect of software testing, one that almost every tester performs either knowingly or subconsciously.

Cem Kaner coined the term Exploratory Testing in his book “Testing Computer Software” and described it as:

“Simultaneous test design, test execution and learning with an emphasis on learning”



Is Excel holding back your testing?

My guest post at the PractiTest QA Learning Center

As testers, we have all worked with Excel at some point in our careers. If you are using Excel now, this article is for you 🙂 Excel is used as a test management, documentation, and reporting tool by many test teams. In the early stages, most teams rely on Excel spreadsheets for planning and documenting tests, as well as for reporting test results. But as teams grow, sharing information through Excel sheets becomes problematic. What used to be easy and intuitive becomes very challenging, and difficult scenarios like the ones below become a day-to-day reality:

  • The simple task of figuring out which Excel sheet has the test cases you need to run takes longer and longer.
  • Gathering the status of testing tasks and of the project as a whole can only be done by going to each desk, one by one, and asking.
  • A tester mistakenly spent 6 hours running the wrong tests in the wrong environment because of an outdated Excel sheet.
  • Testers routinely lose their work or test results by saving over, overwriting, or misplacing their Excel sheets.
  • Most test activities are not documented or accounted for because writing tests down is considered a luxury.


If one or more of these scenarios sounds familiar, your testing efforts are being held back by Excel!

In my latest guest post for PractiTest, I have written about how Excel can become a roadblock instead of a useful tool for your testing. To read the complete article, click here.

In it, I talk about the issues that come with using Excel for:

  • Visibility within the test team
  • Configuration Management of test items
  • Test Planning and Execution
  • Test Status and Reporting

Please give it a read and share your thoughts!

Cheers!

Nishi

 

A simplified Agile Test Strategy for Cross Environment Testing

Cross environment testing is often viewed as a tedious and repetitive task, and it is generally a challenge to accommodate within an agile life cycle. In my recent guest post for Gurock, I shared my own experience from an agile release in which we created a strategy to cover the many test environments we had to support.

Using simple steps, discussions, baselining, and agreement within the scrum team, we created a scalable interoperability test strategy, which was later supplemented with automation and other tools. In this article I have talked about the following (a small code sketch follows the list):

  • Testing across OS versions
  • Supporting system versions
  • Localization: multiple language support
  • Planning and test strategy creation
  • Additional ownership by testers
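As a hint of what the resulting suite can look like, here is a small pytest sketch that runs the same check over an agreed environment matrix. The OS/locale combinations and the check itself are illustrative; they are not taken from the article.

```python
# Illustrative interoperability matrix: the same test runs once per agreed
# OS/locale combination.
import pytest

ENV_MATRIX = [
    ("windows-10", "en-US"),
    ("windows-11", "de-DE"),
    ("ubuntu-22.04", "fr-FR"),
    ("macos-14", "en-US"),
]

@pytest.mark.parametrize("os_version, locale", ENV_MATRIX)
def test_environment_entry_is_well_formed(os_version, locale):
    # In a real suite this would provision the environment (VM, container,
    # device farm) and run a smoke test; here we only sanity-check the
    # matrix entries themselves.
    assert os_version
    language, region = locale.split("-")
    assert len(language) == 2 and len(region) == 2
```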

To read more, click here.

Give it a read and share your thoughts:

https://blog.gurock.com/agile-cross-environment-testing/


Thanks

Nishi