A look at tests in Quadrant-2 – Business-Facing tests
On an agile project, the customer team and the development team strike up a conversation based on a user story.
Business-facing tests address business requirements. They express requirements through examples and use a language and format that both the customer and development teams can understand. Examples form the basis for learning the desired behaviour of each feature, and we use those examples as the basis of our story tests in Quadrant-2.
Business-facing tests are also called “customer-facing”, “story”, “customer”, and “acceptance” tests. The term ‘acceptance tests’ should not be confused with ‘user acceptance tests’ from Quadrant-3.
The business-facing tests in Q-2 are written for each story before coding starts, because they help the team understand what code to write.
Quadrant-1 activities ensure internal quality, maximize team productivity, and minimize technical debt.
Quadrant-2 tests define and verify external quality and help us know when we are done.
Customer tests that drive coding are generally written in an executable format and automated, so that team members can run them as often as they like to see whether the functionality works as desired.
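As a sketch of what “executable format” can mean, the example below takes a hypothetical story (“a customer gets 10% off orders over $100”) and expresses it as a customer test: the Given/When/Then comments read like the business conversation, while the assertions make it runnable. The story, rule, and function names are all invented for illustration.

```python
def discounted_total(order_total):
    """Apply a 10% discount to orders over $100 (illustrative business rule)."""
    if order_total > 100:
        return round(order_total * 0.90, 2)
    return order_total

def test_order_over_100_gets_10_percent_discount():
    # Given an order totalling $150
    # When the discount rule is applied
    # Then the customer pays $135
    assert discounted_total(150) == 135.00

def test_order_at_or_below_100_pays_full_price():
    # Given an order at or under the $100 threshold
    # Then no discount is applied
    assert discounted_total(100) == 100
    assert discounted_total(99.99) == 99.99

# Run the examples; in practice a test runner would discover these.
test_order_over_100_gets_10_percent_discount()
test_order_at_or_below_100_pays_full_price()
```

Because each example is a plain function with readable comments, the customer can review the rules while the team runs them on every change.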
Tests need to include more than the customer’s stated requirements. We need to test for post-conditions, impact on the system as a whole, and integration with other systems. We identify risks and mitigate those with our tests. All of these factors then guide our coding.
The tests need to be written in a way that is comprehensible to a business user yet still executable by the technical team.
Getting requirements right is an area where team members in many different roles can jump in to help.
We often forget about non-functional requirements. Testing for them may be a part of Quadrants 3 and 4, but we still need to write tests to make sure they get done.
There are conditions of satisfaction for the whole team as well as for each feature or story. They generally come out of conversations with the customer about high-level acceptance criteria for each story. They also help identify risky assumptions and increase the team’s confidence in writing and correctly estimating the tasks needed to complete the story.
A smart incremental approach to writing customer tests that guide development is to start with a “thin slice” that follows a happy path from one end of the system to the other (also called a “steel thread” or “tracer bullet”). This steel thread connects all of the components together, and once it is solid, more functionality can be added.
After the thin slice is working, we can write customer tests for the next chunk.
It’s a process of “write tests, write code, run tests, learn”.
Another goal of customer tests is to identify high-risk areas and make sure code is written to address them.
Experiment & find ways your team can balance using up-front detail and keeping focused on the big picture.
Quadrant-2 contains a lot of different types of tests and activities. We need the right tools to facilitate gathering, discussing, and communicating examples and tests.
>>Simple tools such as Paper or Whiteboard work well for gathering examples if the team is co-located.
>>More sophisticated tools help teams write business-facing tests that guide development in an executable, automatable format.
A look at tests in Quadrant-1 – Technology-Facing tests
Unit tests and component tests ensure quality by helping the programmers understand exactly what the code needs to do and by guiding them toward the right design.
The term ‘Test-Driven Development’ misleads practitioners who do not understand that it’s more about design than testing. Code developed test-first is naturally designed for testability.
When teams practice TDD, they minimize the number of bugs that must be caught later.
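A minimal sketch of one TDD round, using an invented requirement (leap-year calculation) rather than anything from the text: the test is written first and fails because the function does not exist yet (red), then the simplest implementation that passes is added (green), and the code is refactored with the tests as a safety net.

```python
def is_leap_year(year):
    # Simplest code that makes the tests below pass:
    # Gregorian rule - divisible by 4, except centuries not divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def test_is_leap_year():
    # These assertions were written before is_leap_year existed;
    # each one drove a small addition to the implementation.
    assert is_leap_year(2024)
    assert not is_leap_year(2023)
    assert not is_leap_year(1900)  # century year, not divisible by 400
    assert is_leap_year(2000)      # century year, divisible by 400

test_is_leap_year()
```

Each failing assertion forces a design decision before any production code is written, which is why TDD is described here as a design activity.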
The more bugs that leak out of our coding process, the slower our delivery will be, and in the end, it is the quality that will suffer. That’s why programmer tests in Quadrant-1 are so critical. A team without these core agile practices is unlikely to benefit much from agile values and principles.
Source Code Control, Configuration Management and Continuous Integration are essential to getting value from programmer tests that guide development.
CI saves time and motivates each programmer to run the tests before checking in the new code.
An advantage of driving development with tests is that code is written with the express intention of making tests pass.
A common approach in designing a testable architecture is to separate the different layers that perform different functions in the application.
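One way to picture that layer separation, with hypothetical class names: the business-logic layer below talks to storage only through a narrow interface, so tests can substitute an in-memory fake for a real database and run fast and deterministically.

```python
class InMemoryOrderRepository:
    """Test double standing in for a real persistence layer."""
    def __init__(self):
        self._orders = {}

    def save(self, order_id, total):
        self._orders[order_id] = total

    def get(self, order_id):
        return self._orders[order_id]

class OrderService:
    """Business-logic layer; knows nothing about how orders are stored."""
    def __init__(self, repository):
        self._repo = repository

    def place_order(self, order_id, total):
        if total <= 0:
            raise ValueError("order total must be positive")
        self._repo.save(order_id, total)

    def order_total(self, order_id):
        return self._repo.get(order_id)

# The business rules are exercised entirely in memory.
repo = InMemoryOrderRepository()
service = OrderService(repo)
service.place_order("A-1", 42.50)
assert service.order_total("A-1") == 42.50
```

Because `OrderService` depends on an interface rather than a concrete database, the same tests keep working when the real repository implementation changes, which keeps the automated tests inexpensive to maintain.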
Teams should take time to consider how to create an architecture that makes automated tests easy to create, inexpensive to maintain, and long-lived. Don’t be afraid to revisit the architecture if automated tests don’t return value for the investment in them.
“The biggest value of unit tests is in the speed of their feedback.”
Each unit test is different and tests one dimension at a time.
Learning to write Quadrant-1 tests is hard.
Because TDD is really more of a design activity, it is essential that the person writing the code also writes the tests, before writing the code.
If a delivery date is in jeopardy, push to reduce the scope, not the quality.
Give the team time to learn and provide expert, hands-on training.
Technology-facing tests cannot be done without the right tools and infrastructure.
The Agile Testing Quadrants matrix helps testers ensure that they have considered all of the different types of tests that are needed in order to deliver value.
Unit tests verify functionality of a small subset of the system. Component tests verify the behaviour of a larger part such as a group of classes that provide some services. Unit & Component tests are automated and written in the same programming language as the application. They enable programmers to measure what Kent Beck has called the internal quality of their code.
Tests in Quadrant-2 support the work of the development team but at a higher level. These business-facing tests define external quality and the features that the customers want. They’re written in a way business experts can easily understand using the business domain language.
The quick feedback provided by Quadrant 1 and 2 automated tests, which run with every code change or addition, forms the foundation of an agile team. These tests first guide the development of functionality; once automated, they then provide a safety net that keeps refactoring and the introduction of new code from causing unexpected results.
“Appraising a software product involves both art and science.”
Quadrant-3 classifies the business-facing tests that exercise the working software to see whether it falls short of expectations or won’t stand up to the competition. They try to emulate the way a real user would work with the application. This is manual testing that only a human can do: we use our senses, our brains, and our intuition to check whether the development team has delivered the business value required by the customer.
Exploratory testing is central to this quadrant.
Technology-facing tests in Quadrant-4 are intended to critique product characteristics such as performance, robustness and security.
Creating and running these tests might require the use of specialised tools and additional expertise.
Automation is mandatory for some efforts such as load and performance testing.
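Dedicated tools (load generators, profilers) are the norm here, but the core idea of an automated performance check can be sketched in a few lines. The operation, iteration count, and latency budget below are all made up for illustration: run the operation many times and fail if the average latency exceeds the budget.

```python
import time

def lookup(table, key):
    """The operation under test - a plain dictionary lookup."""
    return table.get(key)

def average_latency_seconds(runs=10_000):
    """Measure the mean time per lookup over many runs."""
    table = {i: i * i for i in range(1_000)}
    start = time.perf_counter()
    for i in range(runs):
        lookup(table, i % 1_000)
    return (time.perf_counter() - start) / runs

# Fail if a lookup averages above 1 ms - a deliberately generous
# budget chosen only so this illustration passes reliably.
assert average_latency_seconds() < 0.001
```

Wiring a check like this into the build turns a performance requirement into something the team verifies on every change instead of discovering late.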
Ward Cunningham coined the term “technical debt” in 1992, but we’ve certainly experienced it throughout our careers in software development.
By taking the time and applying resources and practices to keep technical debt to a minimum, a team will have time and resources to cover the testing needed to ensure a quality product. Applying agile principles to do a good job of each type of testing at each level will, in turn, minimize technical debt.
Each quadrant in the agile testing matrix plays a role in keeping technical debt to a manageable level.
The Agile Testing Quadrants provide a checklist to make sure you have covered all your testing bases. Examine the answers to questions such as:
Are we using unit & component tests to help find the right design for our application?
Do we have an automated build process?
Do our business-facing tests help us deliver a product that matches customer expectations?
Are we capturing the right examples of desired system behaviour?
Do we show prototypes of the UIs and reports to the users before we start coding them?
Do we budget enough time for exploratory testing?
Do we consider technological requirements like performance and security early enough?