Read Along- ‘Agile Testing’ Chapter-20

“Successful Delivery”

  • It’s not enough to just code, test and say it’s done. Our goal is to deliver value to the business in a timely manner.

It is helpful to have a “Fit and Finish” checklist. Sometimes fit and finish items aren’t ready to be included in the product until close to the end. It may be necessary to rebuild parts of the product to include items such as new artwork, license or legal arrangements, digital signatures for executables, copyright dates, trademarks and logos.

It is helpful to assemble these items during the last full development iteration and incorporate them into the product while continuous integration build cycles are still running, so that extra builds are not needed later.

  • Agile testers can serve as a conduit or facilitator when it comes to physical delivery of the software.
  • Most teams accumulate some technical debt, despite the best intentions, especially if they’re working with legacy code. To maintain velocity, your team may need to plan a refactoring iteration at regular intervals to add tests, upgrade tools and reduce technical debt.
  • Some teams resort to ‘hardening’ iterations, where they spend time only finding and fixing bugs, and they don’t introduce any new functionality. This is a last resort for keeping the application and its infrastructure solid. New teams may need an extra iteration to complete testing tasks, and if so, they budget time for that in the release plan.

I, too, have worked with hardening iterations, and here is an article I wrote a while back about them: https://testwithnishi.com/2018/10/08/optimize-your-hardening-sprint-for-a-quality-advantage/

End Game

It is the time when the team applies the finishing touches to the product.

It is not meant to be a bug-fix cycle, because you shouldn’t have any outstanding bugs by then; that said, there may still be one or two to fix.

  • Use the end game to do some final exploratory testing. Step back and look at the whole system and do some end-to-end scenarios.
  • As part of the end game, your application should be deployed to a staging environment, just as it would be deployed to production.
  • Staging environments can also be used for load and performance testing, mock deploys, fail-over testing, manual regression testing, and exploratory functional testing.
  • Automating data migrations enhances your ability to test them and reduces the chance of human error (a minimal sketch follows after this list).
  • Last-minute disasters can happen. The team should cut the release scope if the delivery date is fixed and in jeopardy.
  • Work to prevent a “no go” situation with good planning, close collaboration, driving coding with tests, and testing as you code.
  • As a tester, it is important to understand how customers view the product, because it may affect how you test. Alpha and Beta testing may be the only time you get to interact with end users, so take advantage of the chance to learn how well the product meets their needs.
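Since automated data migrations came up above, here is a tiny sketch of what such a runner could look like. It is illustrative only: the SQLite database, the migrations folder, and the table and file names are my assumptions, not an example from the book. Plain .sql files are applied in order and recorded once applied, so the same script can be run repeatably against staging and then production.

```python
# Minimal migration runner sketch (illustrative assumptions: SQLite database,
# a "migrations" folder of numbered .sql files, a schema_migrations table).
import pathlib
import sqlite3

MIGRATIONS_DIR = pathlib.Path("migrations")  # e.g. 001_create_users.sql, 002_add_index.sql

def apply_migrations(db_path="app.db"):
    conn = sqlite3.connect(db_path)
    # Track which migration files have already been applied.
    conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations (name TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT name FROM schema_migrations")}

    for script in sorted(MIGRATIONS_DIR.glob("*.sql")):
        if script.name in applied:
            continue  # already applied on a previous run
        conn.executescript(script.read_text())
        conn.execute("INSERT INTO schema_migrations (name) VALUES (?)", (script.name,))
        conn.commit()
        print(f"applied {script.name}")

    conn.close()

if __name__ == "__main__":
    apply_migrations()
```

Because the runner skips anything already recorded, the team can exercise the same migration path on staging that will later run against production, which is exactly what makes the migrations testable.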

Learn from each release and take actions to make the next one go more smoothly.

Read Along- ‘Agile Testing’ Chapter-6

“The Purpose of Testing”

  • The Agile Testing Quadrants matrix helps testers ensure that they have considered all of the different types of tests that are needed in order to deliver value.

Quadrant-1

Unit tests verify functionality of a small subset of the system. Component tests verify the behaviour of a larger part such as a group of classes that provide some services. Unit & Component tests are automated and written in the same programming language as the application. They enable programmers to measure what Kent Beck has called the internal quality of their code.
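To make the distinction concrete, here is a minimal sketch of a Quadrant-1 unit test and component test in Python. The DiscountPolicy and PriceCalculator classes are hypothetical, invented only for illustration; the book does not prescribe this example. The unit test checks one small piece of logic in isolation, while the component test exercises two collaborating classes together.

```python
# Quadrant-1 sketch: a unit test for one class and a component test for two
# collaborating classes. All names here are hypothetical, for illustration only.
import unittest

class DiscountPolicy:
    """Tiny piece of domain logic used for illustration."""
    def discount_for(self, order_total):
        return 0.10 if order_total >= 100 else 0.0

class PriceCalculator:
    """Collaborates with DiscountPolicy to price an order."""
    def __init__(self, policy):
        self.policy = policy

    def total(self, order_total):
        return order_total * (1 - self.policy.discount_for(order_total))

class DiscountPolicyUnitTest(unittest.TestCase):
    def test_no_discount_below_threshold(self):
        # Unit level: one class, one behaviour, verified in isolation.
        self.assertEqual(DiscountPolicy().discount_for(99), 0.0)

class PricingComponentTest(unittest.TestCase):
    def test_discount_applied_to_large_order(self):
        # Component level: PriceCalculator and DiscountPolicy working together.
        calculator = PriceCalculator(DiscountPolicy())
        self.assertAlmostEqual(calculator.total(200), 180.0)

if __name__ == "__main__":
    unittest.main()
```

Both tests run in milliseconds with every build, which is what lets them guard the internal quality of the code.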

Quadrant-2

  • Tests in Quadrant-2 support the work of the development team, but at a higher level. These business-facing tests define external quality and the features that the customers want. They’re written in the language of the business domain, in a way that business experts can easily understand (a minimal sketch follows after this list).
  • The quick feedback provided by Quadrant 1 and 2 automated tests, which run with every code change or addition, forms the foundation of an agile team. These tests first guide the development of functionality and, once automated, provide a safety net so that refactoring and the introduction of new code don’t cause unexpected results.
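As a rough illustration of what “readable by business experts” can mean, here is a given/when/then-style test written in plain Python. The Account class, the overdraft rule, and the numbers are hypothetical; in practice teams often express these tests in tools such as Cucumber or FitNesse instead.

```python
# Quadrant-2 sketch: a business-facing rule expressed in given/when/then style.
# The Account class and the withdrawal rule are hypothetical examples.
import unittest

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("Insufficient funds")
        self.balance -= amount

class WithdrawalRules(unittest.TestCase):
    def test_customer_cannot_withdraw_more_than_their_balance(self):
        # Given a customer with a balance of 50
        account = Account(balance=50)
        # When they try to withdraw 80
        # Then the withdrawal is refused and the balance is unchanged
        with self.assertRaises(ValueError):
            account.withdraw(80)
        self.assertEqual(account.balance, 50)

if __name__ == "__main__":
    unittest.main()
```

The test name and the given/when/then comments are deliberately written so that a domain expert could read the scenario and confirm it matches the rule they asked for.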

“Appraising a software product involves both art and science.”

Quadrant-3

  • Quadrant-3 classifies the business-facing tests that exercise the working software to see whether it doesn’t quite meet expectations or won’t stand up to the competition. They try to emulate the way a real user would use the application. This is manual testing that only a human can do: we use our senses, our brains, and our intuition to check whether the development team has delivered the business value required by the customer.
  • Exploratory testing is central to this quadrant.

Quadrant-4

  • Technology-facing tests in Quadrant-4 are intended to critique product characteristics such as performance, robustness and security.
  • Creating and running these tests might require the use of specialised tools and additional expertise.
  • Automation is mandatory for some efforts, such as load and performance testing (a minimal sketch follows below).
[Figure: The Agile Testing Quadrants]
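For a flavour of why automation matters here, below is a minimal load-test sketch using only the Python standard library. The /health endpoint URL, the request count, and the concurrency level are assumptions for illustration. Real Quadrant-4 work usually relies on dedicated tools such as JMeter, Gatling, or Locust; this only shows the kind of repeatable measurement a manual approach cannot give you.

```python
# Minimal load-test sketch: fire N concurrent GET requests at a hypothetical
# /health endpoint and report latency percentiles. Illustrative only.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

BASE_URL = "http://localhost:8080/health"  # hypothetical endpoint
REQUESTS = 200
CONCURRENCY = 20

def timed_request(_):
    # Measure the round-trip time of a single request.
    start = time.perf_counter()
    with urllib.request.urlopen(BASE_URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = sorted(pool.map(timed_request, range(REQUESTS)))
    print(f"median: {statistics.median(latencies) * 1000:.1f} ms")
    print(f"p95:    {latencies[int(len(latencies) * 0.95)] * 1000:.1f} ms")
```

Because the run is scripted, the team can repeat it after every significant change and compare the numbers, instead of relying on a one-off manual impression of “it feels fast enough”.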

Technical Debt

  • Ward Cunningham coined the term “technical debt” in 1992, but we’ve certainly experienced it throughout our careers in software development.
  • By taking the time and applying resources and practices to keep technical debt to a minimum, a team will have time and resources to cover the testing needed to ensure a quality product. Applying agile principles to do a good job of each type of testing at each level will, in turn, minimize technical debt.
  • Each quadrant in the agile testing matrix plays a role in keeping technical debt to a manageable level.

The Agile Testing Quadrants provide a checklist to make sure you have covered all your testing bases. Examine the answers to questions such as:

  • Are we using unit & component tests to help find the right design for our application?
  • Do we have an automated build process?
  • Do our business-facing tests help us deliver a product that matches customer expectations?
  • Are we capturing the right examples of desired system behaviour?
  • Do we show prototypes of the UIs and reports to the users before we start coding them?
  • Do we budget enough time for exploratory testing?
  • Do we consider technological requirements like performance and security early enough?