Four Things That Can Sabotage a Sprint

Success and failure are part of any journey. For agile teams, continuous delivery is the expectation, and it can be a hard one to meet. As sprints go on and tasks pile up, we may stray from the path.

Whether your team is just beginning its agile journey or is made up of seasoned agile pros, you are bound to encounter a failed sprint at some point.

When do you deem a sprint failed? Why does a sprint fail? What are the possible reasons, and how can you learn from the mistakes to avoid them in the future? In my article published on the TestRail blog, I examine four possible reasons for a failed sprint.

Read the complete article at https://blog.gurock.com/four-things-sabotage-sprint/

Bad Estimation

Estimates cannot be completely accurate every time. But when the agile team fails to gauge the true depth or complexity of a task or user story, the estimates can go haywire, leading to a big deviation from the planned timelines within the sprint.

Incoherent Definition of Done

To ensure true completeness, we must agree on a coherent definition of done for each type of task we undertake within a sprint, be it development, testing, design, review or test automation. This makes it easier to keep track of the quality of work and keeps everyone’s understanding of the expected work on the same page.
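
To make this concrete, here is a minimal sketch of how a team might capture its agreed definitions of done as a simple, checkable list per task type. The task types and criteria below are illustrative examples only, not a prescribed list.

```python
# Agreed definitions of done, captured as a checklist per task type.
# All task types and criteria here are illustrative examples.
DEFINITION_OF_DONE = {
    "development": {"code complete", "unit tests passing", "code reviewed", "checked in"},
    "testing": {"tests executed", "defects logged", "tests added to test management system"},
    "automation": {"regression script created", "script green in CI"},
}

def is_done(task_type, completed):
    """A task counts as done only when every agreed criterion for its type is met."""
    return DEFINITION_OF_DONE[task_type] <= set(completed)

# Development work that skipped code review is not done, even if it runs fine:
print(is_done("development", {"code complete", "unit tests passing", "checked in"}))  # False
```

The point is not the code but the agreement: once the criteria are explicit, “done” stops being a matter of opinion.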

Incomplete Stories

More often than not, user stories being developed in the sprint get stuck at some tricky juncture toward the end. Situations arise where you have reached the last day of the sprint but things are still holding up the team:

  • Development of the story was completed but testing is still underway
  • Developers and testers paired to conduct tests but some critical issues remain in the feature that need fixing
  • Development and testing are completed but the automation script is yet to be created for regression of the feature (and automation was part of the exit criteria for the user story)
  • Code review is pending, although the code is already checked in and working fine
  • Tests for the user story were not added to the test management system even though the tester has performed exploratory tests

Due to any of these or similar situations, the user story will be incomplete at the end of the sprint. At that point, the feature cannot be deemed fit for release and cannot be counted as delivered.

Technical Debt

In a fast-paced agile environment, we cannot shirk any part of our work or leave it for later. Whatever we postpone becomes technical debt that is hard to pay off. The longer we leave a task unpicked, the harder it gets to find the time and effort for it while working on ongoing tasks at the same pace… Continue Reading

Speaking at the DevOps & Agile Testing Summit – 8 Nov 2019, Bangalore

I was invited to speak at the DevOps and Agile Testing Summit, organised and conducted by 1.21GWs on 8 November 2019 in Bangalore. It was a great event that brought together many keen minds, both as delegates and as inspiring speakers. https://1point21gws.com/devops/bangalore/

My talk was on “The Building Blocks of a Robust Test Automation Strategy”. As we know, testing teams face a number of questions, decisions and challenges throughout their test automation journey, and there is no single solution for their varied problems! In this talk I outlined a number of strategies that agile teams can follow – what to automate and how much, which approaches to take, whom to involve, and when to schedule these tasks so that releases are of the best quality.

I am grateful that my talk was so well received and led to great discussions later with many participants. I enjoyed the day and am always glad to be invited by the 1.21GWs team.

A peek into the event – pictures from my session

Sahi Pro was also a knowledge partner at the event, and delegates got a peek into the tool via video and brochure handouts.

Looking forward to many more successful events! 🙂

I am speaking at ‘Targeting Quality 2019’, Canada

I am super excited to be speaking at TQ2019, a grand event organised by KWSQA on 23–24 September in Canada!

On top of that, I get to present not one but two talks!! My topics are:

“The What, When & How of Test Automation” (45 mins)

In this talk I will discuss preparing robust automation strategies. Agile means pace, and agile means change. With frequent time-boxed releases and flexible requirements, test automation faces numerous challenges. Haven’t we all asked what to automate and how to go about the daily tasks with the automation cloud looming over our heads? We’ll discuss answers to some of these questions and outline a number of approaches that agile teams can take: what to automate, how to go about the automation, whom to involve, and when to schedule these tasks so that releases are debt-free and of the best quality.

“Gamify your Agile workplace” (15 mins)

In this session I’ll present some innovation games live and have audience volunteers engage and play games based on familiar scenarios. Let’s play and learn some useful innovation games that can help you gamify your agile team and workplace, making team meetings shorter and communication more fun!

Both these topics are close to my heart and I am looking forward to sharing my thoughts with a wider audience.

I am also excited to meet all the awesome speakers, as well as get to know the fantastic team of organizers behind the event!

Check out the detailed agenda here – https://kwsqa.org/tq2019/schedule/

Follow @testwithnishi, @KWSQA and #TQ2019 on Twitter for more updates on the event!

Also check out & support other initiatives by KWSQA at https://kwsqa.org/kwalitytalks/

Wish me luck! 🙂

I am speaking at the ‘World Test Engineering Summit’, Bangalore

I am pleased to announce that I will be speaking at the upcoming ‘World Test Engineering Summit’, organised by 1.21GWs in Bangalore. It sure is an impressive lineup of speakers, and I am glad to be a part of it! Check out the details of the event here –

https://1point21gws.com/testingsummit/bangalore/testengineering/

I will be speaking on –

“Layers in Test Automation – Best Practices for Separation and Integration”

About my topic –

Often a testing team consists of a mix of subject matter experts, manual testers and testers with some automation experience. Writing tests in the language of the business allows all stakeholders to participate and derive value from the automation process. Whether you are a nervous beginner or an expert at test automation, you need to know and understand the layers of test automation and how to separate the code from the test. Let us discuss the best approaches and practices for creating a robust automation framework with correct separation as well as integration of these layers. We will also see a demo of how to implement this, with a case study!
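
As a small taste of what layer separation looks like in practice, here is a minimal Python sketch (not the framework or case study from the talk): a business-readable test delegates UI mechanics to a page layer, which in turn talks to a driver stand-in. All class names and locators are illustrative.

```python
# A minimal sketch of layer separation: the business-readable test at the
# bottom knows nothing about locators, and the page layer knows nothing
# about business rules. FakeDriver stands in for a real browser driver.

class FakeDriver:
    """Stand-in for a real driver (e.g., Selenium WebDriver or Sahi Pro)."""
    def fill(self, locator, value):
        print(f"fill {locator} with {value!r}")
    def click(self, locator):
        print(f"click {locator}")
    def text_of(self, locator):
        return "Welcome, nishi"

# Page layer: owns the locators and UI mechanics for one screen.
class LoginPage:
    def __init__(self, driver):
        self.driver = driver
    def log_in(self, user, password):
        self.driver.fill("#username", user)
        self.driver.fill("#password", password)
        self.driver.click("#submit")
    def greeting(self):
        return self.driver.text_of("#greeting")

# Business layer: reads like the requirement and delegates mechanics downward.
def test_registered_user_can_log_in():
    page = LoginPage(FakeDriver())
    page.log_in("nishi", "secret")
    assert "Welcome" in page.greeting()

test_registered_user_can_log_in()
```

Swapping FakeDriver for a real driver changes nothing in the layers above it, which is exactly the benefit of keeping the layers separate.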

Also, Sahi Pro is partnering with the event and setting up a demo booth! Our team will be there to showcase the capabilities of this unique tool and answer all your questions.

Be sure to stop by the booth to chat and catch a demo!

Looking forward to a wonderful event! 🙂

Four Questions to ask yourself when planning Test Automation

Test automation poses its own challenges, distinct from those of manual testing. Teams struggle to get the most out of their test automation because of the many hurdles along the way.

Good planning can act as a solid foundation for your test automation project and help you fully reap the benefits. So there are many things to consider and discuss before jumping into test automation, to ensure you are following the right path.

In my article published on the Gurock TestRail blog, I discuss four main questions to ask yourself before starting with test automation:

  1. What is your team’s goal for test automation?
  2. What about implementation?
  3. What is your execution strategy?
  4. Who will focus on maintenance?

Read the full article to find out more about each of these questions and how they help you finalize a test automation strategy that will lead your team to success!

Please give this article a read and share your thoughts!

Cheers

Nishi

A Day in the Life of an Agile Tester

An agile tester’s work life is intriguing, busy and challenging. A typical day is filled with varied activities like design discussions, test planning, strategizing for upcoming sprints, collaborating with developers on current user stories, peer reviews for teammates, test execution, working with business analysts for requirement analysis and planning automation strategies.

In my article for the Gurock TestRail blog, I explore a typical day in the life of an agile tester and how varied activities and tasks keep her engaged, busy and on her toes all the time!


Let’s sneak a peek into a day in the life of an agile tester: you will go through her daily routine and experience her packed schedule in real time.

Read the full article:

https://blog.gurock.com/agile-tester-work-life/

 

Automation Test Suites Are Not God! 

Earlier this year, one of my articles was published at http://www.agileconnection.com, wherein I highlighted the role and use of automation in an agile context and the irreplaceable importance of manual testing.

Here are excerpts from my article – for the complete text, visit

http://www.agileconnection.com/article/automation-test-suites-are-not-god 

Automation Test Suites Are Not God!

Working in an agile environment makes it essential to automate system testing so tests can be rerun in each iteration. But in the nascent stages of some systems, the UI, product flow, or design itself changes in each iteration, making it difficult to maintain the automation scripts. The role of automation in an agile context is the repetition of regression and redundant tasks, while the actual testing happens at the hands of manual testers. The creativity, skills, experience, and analytical thought process of a human mind cannot be replaced by automated scripts. This belief has to be ingrained in every organization’s culture in order to achieve the best quality.

Talking about software testing today is incomplete without a mention of test automation. Automation has become an important part of testing tasks and is deemed critical to the success of any software development team—and rightly so, with benefits like speed, reliability, reduced redundancy, and complete regression cycles within tight deadlines.

But a common perception among team managers and policy makers is that automation tools are the complete package for testing activities, and they begin expecting the world of them. A common misconception is that test automation is the “silver bullet” for improving quality, and organizations start to believe that investing once in an automation tool ends all other testing-related tasks and investments. Managers start expecting everything out of their automation suites—100 percent coverage, minimum run times, no maintenance, and quality delivered overnight. It’s basically expecting godlike miracles to happen! Hence the need to educate teams about the actual purpose of automation and the importance of manual tests in this context.

Working in an agile environment makes it essential to automate system testing, due to the bulk of regression tests required in every iteration. But what makes test automation hard in an agile context is agile’s inherent nature of constant change. Because the system under test changes continuously, the automation scripts have to be changed so often that they become a chore themselves instead of a benefit.

As tester James Bach wrote, Test Automation Rule #1 is “A good manual test cannot be automated.” By this thinking, it is certainly possible to create a powerful and useful automated test, one that helps you know where to look and where to direct your manual exploration. But the maximum benefit thereafter will come from applying experience and exploration techniques.

This is based on the fact that humans have the ability to notice, analyze, and observe things that computers cannot. Even unskilled testers and amateur minds, working in the total absence of any knowledge, requirements, or specifications of the system under test, can observe and find plenty of things that no tool will be able to.

In a true sense, automation is not actually testing; it is merely the repetition of tasks and tests that have been performed earlier and are required only as part of regression cycles. What makes automation powerful are the various reports and metrics associated with it.

But the actual testing still happens at the hands of a real tester, who applies his creativity, skills, experience, and analytics to find and report bugs in the system under test. Once his tests pass, they are then converted to automated suites for the next iteration, and so on.

So the basic job of automation suites is to free up the manual testers’ time and resources from repetitive, redundant tasks so that they can concentrate on the newly delivered features and find the maximum number of bugs in those areas.

Therefore, it is very important not to get caught up in the various charts, coverage numbers, and metrics of our test suites. Instead, we must focus on our project’s context and requirements and, based on those, design our ratio of automated to manual tests.

A simple example to illustrate this would be testing a web form with multiple inputs spread across multiple pages. An automation script created for it would ideally open the webpage, input the values, submit them, and maybe check a couple of validations on input fields along the way. So the process would ideally be

Observe > Compare > Report

The automation should walk the mostly happy path of a user scenario, observe the behavior against the set expected results, and report whether the form passes or fails at the end.
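
For illustration, here is a minimal sketch of such a happy-path script using Selenium WebDriver in Python; the URL, element IDs, and expected messages are hypothetical.

```python
# A minimal sketch of the happy-path form check described above, using
# Selenium WebDriver. The URL, element IDs and messages are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    # Observe: open the form and fill in the inputs
    driver.get("https://example.com/signup")
    driver.find_element(By.ID, "name").send_keys("Nishi")
    driver.find_element(By.ID, "email").send_keys("not-an-email")
    driver.find_element(By.ID, "submit").click()

    # Compare: check one scripted validation against the expected result
    error = driver.find_element(By.ID, "email-error").text
    assert "valid email" in error.lower(), "expected an email validation message"

    # Report: the script answers pass/fail only for the path it was told to walk
    print("PASS: email validation behaved as expected")
finally:
    driver.quit()
```

Everything beyond this scripted path—the order of inputs, retained values, usability—is where the manual testing described next earns its keep.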

On the other hand, if we perform manual tests on the same web form, we would try entering the inputs in a different order; navigate to and from the pages, observing whether the inputs are retained; and look for usability issues, such as difficulty locating the fields or navigation buttons, a font that is too small or unclear in some settings, or form submission taking so long that some performance benchmarking might be required.

Perform > Analyze > Compare (with existing system, specifications, experience, discussions) > Inform (and discuss) > Recheck (if needed) > Personal Opinion and Suggestions > Final Report

This shows that even though the web form could easily have been tested and passed by the automation test suite, we might miss out on other valuable aspects if we skip the manual, experience-based tests.

Markus Gärtner, author of the book ATDD by Example, summed it up nicely when he wrote, “While automated tests focus on codifying knowledge we have today, exploratory testing helps us discover and understand stuff we might need tomorrow.”

Automation test suites, though essential, should not be thought of as the “silver bullet” of quality. The actual test effort still lies with the manual tester’s expertise and skills, without which real quality cannot be ingrained into the system. We must keep unrealistic expectations of automation tests in check, because, after all, automation suites are not God!