Read Along- ‘Agile Testing’ Chapter-17

“Iteration Kickoff”

  • Most teams kick off a new iteration with a planning session, where they discuss one story at a time, writing and estimating all of the tasks needed to implement it.
  • Testing task cards need to be written along with development task cards, and both need to be estimated realistically.
  • When writing programming task cards, make sure that coding task estimates include time for writing unit tests and for all necessary testing by programmers.
  • Testers should help make sure that all necessary cards are written and that they have reasonable estimates.

Your job as a tester is to make sure enough time is allocated to testing and to remind the team that testing & quality are the responsibility of the whole team. When the team decides how many stories they can deliver in an iteration, the question isn’t “How much coding can we finish?” but “How much coding and testing can we complete?”

Commit Conservatively – It is always better to bring in another story later than to drop a story you have already committed to.

  • Working closely with customers or customer proxies is one of the most important activities as an agile tester. Good communication usually takes work.
  • We want “big-picture” tests to help the programmers get started in the right direction on a story. High level tests should convey the main purpose behind the story.
  • Don’t forget to ask the programmers what they think you might have missed. What are the high-risk areas of the code? Where do they think testing should be focused?

When testability is an issue, make it the team’s problem to solve.

One beneficial side-effect of reviewing the tests with programmers is the cross-learning that happens.

High-level test cases, along with the executable tests you’ll write during the iteration, will form the core of the application’s documentation.

People unfamiliar with agile development often have the misconception that there’s no documentation. In fact, agile projects produce usable documentation that contains executable tests and is thus always up to date.
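To make that idea concrete, here is a minimal, hypothetical sketch of what such an executable, documentation-style test could look like. The story, the shipping_cost() function and the figures are invented for illustration; they are not from the book.

```python
# Hypothetical example: a high-level test that states the intent of a story
# ("a returning customer gets free shipping on orders of $50 or more")
# in executable form. The function and figures are illustrative assumptions.

def shipping_cost(order_total, returning_customer):
    """Toy implementation standing in for the real business rule."""
    if returning_customer and order_total >= 50:
        return 0
    return 5


def test_returning_customer_gets_free_shipping_at_50_and_above():
    # The test name and assertion double as documentation of the story's purpose.
    assert shipping_cost(order_total=60, returning_customer=True) == 0


def test_new_customer_still_pays_shipping():
    assert shipping_cost(order_total=60, returning_customer=False) == 5
```

Because the test names and assertions state the intent of the story, reading the test file doubles as reading the documentation, and it stays current as long as the tests keep passing.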

Read Along- ‘Agile Testing’ Chapter-15

“An Iteration in the life of a Tester”

  • Testers bring a different viewpoint to planning and estimation meetings. They need to be a part of the story sizing process.
  • The team needs to break development into small, testable chunks to help decide which stories are tentatively planned for which iteration. The key word is ‘testable’.
  • If there are stories that present a big testing challenge, it might be good to do those early on.

Release Planning is the time to start asking for examples and use cases of how the features will be used, and what value they’ll provide. Drawing flowcharts or sample calculations on a whiteboard can help pinpoint the core functionality.

  • The agile tester thinks about how each story might affect the system as a whole or other systems that ours has to work with.
  • In agile development, the test plan must be concise and lightweight, assessing testing issues, including risk analysis, and identifying assumptions. The biggest benefit of test planning is the planning itself.

This chapter shows examples of lightweight agile test plans created by Lisa and Janet that are very useful! Here is my take on creating a simple agile test plan using a mind map -

Agile Test Plan – using a Mind Map

The chapter discusses task boards and how they can be leveraged. Here is my take on how agile teams can use task boards, which I wrote a few months back –

https://testwithnishi.com/2019/07/25/4-ways-task-boards-can-help-agile-teams/

Agile metrics are key to measuring the team’s progress. Plan what metrics you want to capture for the life of the release, think about what problem you are trying to solve, and capture only those metrics that are meaningful for your team.
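For example, if your team decides that defect escape rate and automation coverage are the two numbers that actually answer a question you have, a tiny script like the sketch below is enough to track them per sprint. All the figures and field names here are made up purely for illustration.

```python
# Minimal sketch: track two metrics a team has decided are meaningful.
# The sprint data below is invented purely for illustration.

sprints = [
    {"name": "Sprint 12", "defects_in_sprint": 9, "defects_escaped": 1,
     "tests_total": 40, "tests_automated": 22},
    {"name": "Sprint 13", "defects_in_sprint": 7, "defects_escaped": 3,
     "tests_total": 55, "tests_automated": 30},
]

for sprint in sprints:
    total_defects = sprint["defects_in_sprint"] + sprint["defects_escaped"]
    escape_rate = sprint["defects_escaped"] / total_defects * 100
    automation_pct = sprint["tests_automated"] / sprint["tests_total"] * 100
    print(f"{sprint['name']}: defect escape rate {escape_rate:.0f}%, "
          f"automation coverage {automation_pct:.0f}%")
```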

Here is something I wrote about useful and not-so-useful Agile metrics-

https://testwithnishi.com/2019/12/04/metrics-your-agile-team-should-should-not-be-tracking/

Don’t get too caught up in committing to your plans; the situation is bound to change. Instead, prepare to do the right activities and get the right resources in time to meet the customer’s priorities!

Read Along- ‘Agile Testing’ Chapter-13

“Why we Want to Automate Tests and What holds us back”

  • Test automation is a core agile practice. Agile projects depend on automation.
  • Manual tests take too long and are error prone.
  • Automated regression tests provide a safety net. They give feedback early & often.
  • Automated builds, deployment, version control and monitoring also go a long way toward mitigating risk and making your development process more consistent.

“Build Once, Deploy to Many” – is a tester’s dream!

Projects succeed when good people are free to do their best work. Automating tests appropriately makes that happen!

If we become complacent about our testing challenges, depend solely on automated tests to find our issues, and then fix them just enough for the tests to pass, we do ourselves a disservice.

However, if we use the tests to identify problem areas and fix them the right way or refactor as needed, then we are using the safety net of automation in the right way.

When tests that illustrate examples of desired behavior are automated, they become ‘living’ documentation of how the system actually works.

Barriers to Automation –

  • Programmers’ attitude – “Why automate at all?”
  • The Hump of Pain – the initial learning curve
  • Initial Investment
  • Fear – Non-programming testers fear they have nothing to contribute
  • Legacy code
  • Old habits, team culture
  • Without automated regression tests, manual regression testing will continue to grow in scope and eventually may simply be ignored.
  • Teams with automated tests and build processes have a more stable velocity.
  • Good test design practices produce simple, well-designed, continually refactored, maintainable tests (see the sketch after this list).
  • Team culture & history may make it harder for programmers to prioritize automation of business-facing tests over coding new features. Using Agile principles & values helps the whole team overcome barriers to test automation.
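To illustrate the point about simple, well-designed, maintainable tests, here is a small hypothetical pytest sketch. The FakeCart class and the fixture are invented examples, not from the book; the idea is that shared setup lives in one fixture, so each test stays short and a setup change is made in exactly one place.

```python
import pytest

# Hypothetical example of maintainable test design: shared setup lives in one
# fixture, so each test stays short and a setup change is made in one place.
# FakeCart stands in for whatever system the team is really testing.

class FakeCart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


@pytest.fixture
def cart_with_one_item():
    cart = FakeCart()
    cart.add("book", 12.50)
    return cart


def test_total_reflects_single_item(cart_with_one_item):
    assert cart_with_one_item.total() == 12.50


def test_adding_second_item_updates_total(cart_with_one_item):
    cart_with_one_item.add("pen", 2.50)
    assert cart_with_one_item.total() == 15.00
```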

4 Exit Criteria your User Stories must have

Planning and developing new features at the fast pace of agile is a hard game. Knowing when you are really done and ready to deliver is even harder.

Having predetermined exit criteria helps you decide when a feature is truly ready to ship. In my article published on the TestRail Blog, I compiled a list of exit criteria you must add to your user stories to make it easy to bring conformity and quality to all your features.

All Tasks Are Completed

This first one sounds obvious, but it may not be. I still see many teams struggling with getting their testing done within the sprint. Developers work on a user story and deem it done, while testers are left to play catch-up in the next sprint.

Put that practice to an end once and for all by making sure that no user story can be proclaimed done without having all tasks under it completed, including development tasks, testing tasks, design and review tasks, and any other tasks that were added to the user story at the beginning.

Ensuring all tasks are completed in a sprint also mandates that you begin thinking in depth about each user story and the tasks necessary for each activity to be completed, so that you do not miss out on anything at the end.

Tests Are Automated Whenever Possible

As our agile teams move toward continuous delivery and adopting DevOps, our testing also needs to be automated and made a part of our pipelines. Ensuring that test automation gets done within the sprint and keeps pace with new features is essential.

By having test automation tasks be a part of a user story delivery, you can keep an eye out for opportunities to automate tests you are creating, allocate time to do that within the sprint, and have visibility of your automation percentages.

I have used the following exit criteria:

  • At a minimum, regression tests for the user story must be added to the automation suite
  • At least 50% of tests created for the user story must be automated
  • Automated regression must be run at least once within the sprint

Depending on what your automation goals are, decide on a meaningful standard to apply to all your user stories.
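As a rough sketch of how a threshold like “at least 50% of tests automated” could be checked for a story, the snippet below walks through some invented test records. The record format and the 50% figure are placeholders for whatever your team actually tracks.

```python
# Hedged sketch: check a story's test records against an automation threshold.
# The record format and the 50% threshold are illustrative assumptions.

AUTOMATION_THRESHOLD = 0.50

story_tests = [
    {"id": "TC-101", "automated": True},
    {"id": "TC-102", "automated": True},
    {"id": "TC-103", "automated": False},
    {"id": "TC-104", "automated": False},
]

automated = sum(1 for test in story_tests if test["automated"])
ratio = automated / len(story_tests)

if ratio >= AUTOMATION_THRESHOLD:
    print(f"Exit criterion met: {ratio:.0%} of the story's tests are automated.")
else:
    print(f"Exit criterion not met: {ratio:.0%} automated, "
          f"{AUTOMATION_THRESHOLD:.0%} required.")
```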


Using a Combination of Scripted, Automated and Exploratory Testing for Optimum QA Coverage

Most test teams today are struggling to find better ways to handle their testing. With the advent of Agile in our software development processes, teams are perennially under pressure to deliver faster releases without lowering their standards of quality. This, in turn, puts pressure on in-house test teams to find the crucial issues and prevent defect leakage. For this reason, testers look for strategies and practices that can help them achieve their goals and add more value to the product’s quality.

In my opinion as a hands-on agile tester, there is no single silver bullet for quality; rather, it is a combination of different types of and approaches to testing that can help us get closer to our quality goals. Test teams need to strategize and plan the use of a combination of scripted, automated and exploratory tests to achieve optimum coverage and the best quality software.

Here is my latest article for the PractiTest QA Learning Centre, where I discuss the need for a combination of scripted, automated and exploratory tests for optimum QA coverage –

https://www.practitest.com/qa-learningcenter/thank-you/exploratory-testing-optimum-qa-coverage/ 

Scripted Tests

The typical test approach begins with test scripting, designing tests according to the software’s functionality. These are created using requirement analysis and test design techniques, along with the common sense and skills of our testers. These scripted tests form the starting point for testing a new feature, change or addition to the software.

Automated Testing

In addition to running scripted tests manually, testers also rely on automated tests. These are scripted using various test automation tools, and the ability to write automated test scripts is thus a much-sought-after skill for test professionals today. Running some tests as automated scripts helps repeatability and saves the test team a lot of time and effort. Most importantly, by automating the drudgery away, it spares testers repetitive, laborious manual tests and frees up their time for more creative thinking and exploration of the application.
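As a small, hypothetical illustration of that repeatability, a manual script such as “check the discount for several order values” can become one data-driven automated test that reruns in seconds. The discount rule and the numbers here are invented for the example.

```python
import pytest

# Hypothetical example: one data-driven automated test replaces repeatedly
# executing the same manual script for each order value. The discount rule
# (10% off orders of 100 or more) is invented for illustration.

def discount(order_total):
    return round(order_total * 0.10, 2) if order_total >= 100 else 0.0


@pytest.mark.parametrize("order_total, expected_discount", [
    (50, 0.0),
    (99.99, 0.0),
    (100, 10.0),
    (250, 25.0),
])
def test_discount_rule(order_total, expected_discount):
    assert discount(order_total) == expected_discount
```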

Exploratory Testing

Exploration of software is basically looking at the feature, functionality or change, and the overall behavior, from a learning as well as a critical standpoint. Exploratory testing is a crucial aspect of software testing, one that almost every tester performs knowingly or subconsciously.

Cem Kaner coined the term Exploratory Testing in his book “Testing Computer Software” and described it as:

“Simultaneous test design, test execution and learning with an emphasis on learning”



Is Excel holding back your testing?

My guest post @PractiTest QA Learning center

As testers, we have all worked with Excel at some point in our careers. If you are using Excel now, this article is for you 🙂 Excel is used as a test management, documentation and reporting tool by many test teams. In the early stages, most teams rely on Excel spreadsheets for planning and documenting tests, as well as for reporting test results. As teams grow, sharing information using Excel sheets becomes problematic. What used to be easy and intuitive becomes very challenging. Encountering difficult work scenarios like the ones below becomes a day-to-day reality:

  • The simple task of figuring out which Excel file has the test cases you need to run takes longer and longer.
  • Gathering the status of the testing tasks and of the project can only be done by going from desk to desk and asking each person.
  • A tester mistakenly spends six hours running the wrong tests in the wrong environment because an Excel sheet was not the up-to-date copy.
  • Testers routinely lose their work or test results by saving over, overwriting or misplacing their Excel sheets.
  • Most test activities are not being documented or accounted for because writing tests is considered a luxury.


If one or more of these scenarios sounds familiar to you, your testing efforts are being held back by Excel!

In my latest guest post for PractiTest, I have written about how Excel can be a roadblock instead of a useful tool for your testing. To read the complete article, click here.

In it, I talk about issues with the use of Excel in relation to:

  • Visibility within the test team
  • Configuration Management of test items
  • Test Planning and Execution
  • Test Status and Reporting

Please give it a read and share your thoughts!

Cheers!

Nishi

 

A simplified Agile Test Strategy for Cross Environment Testing

Cross environment testing is viewed as a tedious and repetitive task, and it is generally a challenge to accommodate within an agile life cycle. In my recent guest post for Gurock, I showcased my own experience from an agile release in which we created a strategy to cover the many test environments we had to support.

Using simple steps, discussions, baselining and agreement within the scrum team, we created a scalable interoperability test strategy, which was later supplemented with automation and other tools. In this article I have talked about the following (a small parametrization sketch follows the list):

  • Testing across OS versions
  • Supporting System versions
  • Localization- multiple language support
  • Planning and Test Strategy creation
  • Additional Ownership by testers
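As a rough sketch of how an agreed environment matrix can be expressed so it stays explicit and easy to extend, here is a hypothetical pytest parametrization. The OS versions, locales and the trivial greeting check are placeholders, not the actual matrix or tests from the article.

```python
import pytest

# Hypothetical sketch: an agreed cross-environment matrix expressed as test
# parameters. Real tests would drive the product on each target environment;
# a trivial localized-greeting check stands in for that here.

OS_VERSIONS = ["windows-10", "windows-11", "ubuntu-22.04"]
LOCALES = ["en-US", "de-DE", "ja-JP"]

GREETINGS = {"en-US": "Hello", "de-DE": "Hallo", "ja-JP": "こんにちは"}


@pytest.mark.parametrize("locale", LOCALES)
@pytest.mark.parametrize("os_version", OS_VERSIONS)
def test_greeting_defined_for_every_supported_environment(os_version, locale):
    # One test definition covers every OS/locale combination in the matrix.
    # A real test would launch or target os_version; here it only expands the matrix.
    assert GREETINGS[locale] != ""
```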

To read more, click here.

Give it a read and share your thoughts-

https://blog.gurock.com/agile-cross-environment-testing/


Thanks

Nishi