Guest Talk @ ETMarlabs meetup for EUROSTAR 2017 #magicoftestinginindia

I was invited to present a guest talk at the meetup organised by the ET Marlabs team for EUROSTAR 2017 on 9 September 2017, the first of its kind in India, and I gladly obliged! I presented a talk on the Agile Manifesto, its lessons for keen testers, and how it answers our dilemmas in agile testing. The talk was very well received and sparked some great discussions with the participants. I was accompanied by another guest speaker, Mr. Vinay Krishna, who spoke about Behavior Driven Development (BDD) using the Cucumber framework, which was a very informative session too.

The team at ET Marlabs had also organised some great activities, including a testing relay game and a quiz, which brought out the participants' testing minds and enthusiasm and were well rewarded too! I would like to thank them for their kind invitation and would encourage them to organise and participate in more such community events!

Have a glimpse here –

http://highonblog.com/teamstar-2017-the-magic-of-testing-meetup-in-india/ 

Paying Off the Technical Debt in Your Agile Projects

Just as you should not take out a financial loan without having a plan to pay it back, you should also have a plan when incurring technical debt. The most important thing is to have transparency—adequate tracking and visibility of the debt. Armed with the knowledge of these pending tasks, the team can devise a strategy for when and how to “pay off” technical debt.

Learn about managing technical debt and testing debt in agile teams, and share your thoughts on my latest article, published at www.stickyminds.com and also at www.agileconnection.com.

***** Here are some excerpts from the article for my readers *****

Technical debt initially referred to code refactoring, but in today’s fast-paced software delivery, it has a growing and changing definition. Anything that the software development team puts off for later—be it smelly code, missing unit tests, or incomplete automated tests—can be technical debt. And just like financial debt, it is a pain to pay off.

Forming a Plan to Pay Off Technical Debt

Let’s say a development team working on a new project started out following a certain programming standard. They even set up an automated tool to run on the code periodically and give reports on the adherence to these standards. But the developers got busy and stopped running this tool after a sprint or two, and when the development manager asked for a report after a couple of months, there were hundreds of errors and warnings, all of which now need to be corrected.
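
To picture what such a periodic check looks like, here is a minimal sketch, assuming a Python codebase with flake8 installed; the script and the "src" directory name are illustrative, not the actual tool from the example above.

```python
# check_standards.py -- minimal sketch of a periodic coding-standards report.
# Assumes a Python codebase with flake8 installed; the "src" directory name
# is illustrative.
import subprocess
import sys


def run_standards_check(source_dir: str = "src") -> int:
    """Run flake8 on source_dir and return the number of reported violations."""
    result = subprocess.run(
        ["flake8", source_dir],
        capture_output=True,
        text=True,
    )
    # flake8 prints one violation per line, e.g. "src/app.py:12:1: E302 ...".
    violations = [line for line in result.stdout.splitlines() if line.strip()]
    for line in violations:
        print(line)
    return len(violations)


if __name__ == "__main__":
    count = run_standards_check()
    print(f"{count} coding-standard violations found")
    # A non-zero exit code lets a scheduled CI job flag the build as soon as
    # the count starts to climb, instead of surfacing hundreds of issues later.
    sys.exit(1 if count else 0)
```

Run on a schedule, say nightly, a check like this keeps the debt visible every sprint rather than letting it pile up unseen for months.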

This scenario happens all the time with agile teams focused on providing as much customer value as possible each sprint. The problem then needs to be fixed immediately, because despite having all the functionalities in place, the team doesn’t want to release code that is not up to production standards.

The team is then faced with a few options for how to service the debt:

  • Negotiate with the product owner on the number of user stories planned for the upcoming sprint in order to have some extra time for refactoring the code
  • Dedicate an entire sprint to code refactoring
  • Divide all errors and warnings among the development team and let them handle the task of corrections within the next sprint, along with their regular development tasks, by scheduling extra hours
  • Plan to spread this activity over a number of sprints and have a deadline for this report before the end of the release
  • Estimate the size of refactoring stories and either plan them into upcoming sprints as new user stories or accommodate them as part of existing user stories

Though these are all viable options, the best approach depends on the team, the context, upcoming deadlines, the risk the team is willing to take, the priority of the functionality that needs to be shipped, and the collaboration with the product owner.

Again, just like when you take out a financial loan, you should plan to pay off technical debt as quickly as possible using the resources you have. It’s a good idea to perform a risk analysis of the situation and reach a consensus with the team about the best approach to take.

Technical Debt in Testing

Technical debt doesn’t occur only in programming. Testing activities are also likely to incur technical debt over time due to a variety of factors, including incomplete testing of user stories, letting regression tests pile up for later sprints, not automating essential tests every sprint, not having complete test cases written or uploaded to test management tools, not cleaning up test environments before the next iteration, and not developing or testing with all test data combinations on the current features.

Sometimes debt may be incurred intentionally for a short term, such as not updating tests with new test data when testing on the last day of the sprint due to a time crunch, but planning to do it within the first couple of days in the next sprint. As long as the team has an agreement, it’s acceptable to defer some technical debt for a short while.

On occasion, debt may be incurred intentionally for a longer term by planning it in advance, such as deciding to postpone nonfunctional tests on the system, like performance or security-related tests, until a few sprints have passed and features are stable enough to carry them out. Again, as long as the team agrees with the risk and has a plan to address it, it is fine to defer certain activities.

Taking on testing debt can get you out of tight situations when needed, but you still need to ensure that you plan carefully, remain aware of the debt, communicate it openly and frequently, and pay it off as soon as possible. Having a plan to service these debts reduces your burden over time and ensures your software maintains its quality.


Prevention Is Better Than Cure

Avoiding technical debt altogether is always preferable. As the saying goes, an ounce of prevention is worth a pound of cure.

Every team has to devise its own strategy to prevent technical debt from accumulating, but a universal best practice is to have a definition of “done” in place for all activities, user stories, and tasks, including for completing necessary testing activities. A definition of “done” creates a shared understanding of what it means to be finished so that everybody involved on the project means the same thing when they say it’s done. It becomes an expression of the team’s quality standards, and the team will become more productive as their definition of “done” gets more stringent.

Here’s a good example of criteria for a team’s definition of “done” for every user story they work on:

  • All acceptance criteria for the user story must be met
  • Unit tests must be written for the new code and maintain 70 percent coverage
  • Functional tests must be performed, and exploratory tests must be performed by a peer tester other than the story owner
  • No critical or high severity issues remain open
  • All test cases for each user story must be documented and uploaded in the test management portal
  • Each major business scenario associated with the user story must be automated and added to the regression test suite, maintaining 70 percent functional test coverage

Verifying that the activities completed meet these criteria will ensure that you are delivering features that are truly done, not only in terms of functionality, but in terms of quality as well. Adhering to this definition of “done” will ensure that you do not miss out on essential activities that define the quality of the deliverable, which will help mitigate the accumulation of debt.
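
One of the criteria above, the 70 percent unit test coverage, lends itself well to automation so that “done” is verified rather than assumed. Here is a minimal sketch, assuming a Python project with pytest and pytest-cov installed; the package name "myapp" and the wrapper script are illustrative, not part of the original article.

```python
# done_gate.py -- minimal sketch of enforcing one definition-of-"done" item:
# the 70 percent unit test coverage criterion. Assumes pytest and pytest-cov
# are installed; the package name "myapp" is illustrative.
import subprocess
import sys

COVERAGE_THRESHOLD = 70  # percent, taken from the team's definition of "done"

result = subprocess.run(
    [
        "pytest",
        "--cov=myapp",                             # measure coverage of the package
        f"--cov-fail-under={COVERAGE_THRESHOLD}",  # fail the run below the threshold
    ]
)

# pytest-cov fails the test session when coverage drops below the threshold,
# so the exit code doubles as a "done / not done" signal for this criterion.
sys.exit(result.returncode)
```

Wiring a gate like this into the build means a user story cannot quietly slip below the agreed bar, which is exactly the kind of debt that otherwise accumulates unnoticed.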

Despite best practices and intentions, technical debt often will be inevitable. As long as the team is aware of it, communicates openly about it, and has a plan in place to pay it off as quickly as possible, you can avoid getting in over your head.

*************

Pesticide Paradox in Software Testing

Pests and bugs sound alike? They act alike too!

Boris Beizer, in his book Software Testing Techniques (1990), coined the term “pesticide paradox” to describe the phenomenon that the more you test software, the more immune it becomes to your tests.

Just as insects eventually build up resistance when the same pesticide is applied over and over, until it no longer works, software reacts in a similar way to repetitive testing:

  • Software undergoing the same repetitive tests eventually builds up resistance to them.
  • As you run your tests multiple times, they stop being effective in catching bugs.
  • Moreover, some of the new defects introduced into the system will not be caught by your existing tests and will be released into the field.

Solution: Refresh and Revise Test Materials Regularly

In order to overcome the pesticide paradox, testers must regularly develop new tests exercising the various parts of the system and their interconnections to find additional defects.

Also, testers cannot rely forever on existing test techniques or methods and must be on the lookout to continually improve upon existing methods to make testing more effective.

It is suggested to revisit test cases regularly and revise them. Though agile teams leave little spare time for such activities, the testing team should keep planning these exercises in order to keep the best performance coming. A few ideas to achieve this:

  • Brainstorming sessions – to think of more test ideas around the same component
  • Buddy reviews – new joiners to the team are encouraged to bring a fresh perspective to the existing test scenarios for the product, which may lead to new cases being added
  • Strike out older tests for functionality that has changed or been removed
  • Build new tests from scratch when a major change is made to a component, to open up a fresh perspective
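
To make the variation idea concrete, one lightweight habit is to stop hard-coding the same test data and instead mix fixed boundary cases with a fresh batch of generated cases on every run. The sketch below assumes a Python test suite with pytest; the function under test, normalize_username, is a hypothetical stand-in rather than a real product example.

```python
# test_username_variation.py -- minimal sketch of varying test data between
# runs to counter the pesticide paradox. Assumes pytest is installed; the
# function under test is a hypothetical stand-in.
import random
import string

import pytest


def normalize_username(raw: str) -> str:
    """Hypothetical stand-in for production code: trim and lowercase."""
    return raw.strip().lower()


# Fixed boundary cases that should always be covered.
BOUNDARY_CASES = ["", "   ", "Admin", "  MixedCase  "]

# A fresh batch of random cases per run; the seed is recorded so that any
# failure can be reproduced exactly.
SEED = random.randrange(10**6)
_rng = random.Random(SEED)
RANDOM_CASES = [
    "".join(_rng.choice(string.ascii_letters + " ") for _ in range(_rng.randint(0, 12)))
    for _ in range(5)
]
print(f"random test data seed: {SEED}")  # visible with pytest -s


@pytest.mark.parametrize("raw", BOUNDARY_CASES + RANDOM_CASES)
def test_normalize_username_is_trimmed_and_lowercase(raw):
    result = normalize_username(raw)
    assert result == result.strip()
    assert result == result.lower()
```

Because a few new inputs appear in every run, with the seed printed for reproducibility, the suite keeps probing slightly different paths instead of letting the same “pesticide” lose its effect.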

 

UPDATE–

This article has been recommended and used as a reference by Hannes Lindblom in his blog at https://konsultbolag1.se/bloggen/veckans-testartips-15-tur-genom-variation