What can you learn from the defects you found?

The bugs we find during testing can tell us a lot about the application, the state of its quality and its release-readiness. Bugs can also provide insights into our development processes and practices — and lapses therein.

How can we study bugs to improve the overall state of our project? In my article published on the Gurock TestRail blog, I describe three things to learn from the bugs you find: https://blog.gurock.com/three-learn-bugs/

The location of defect clusters

Defect clustering is one of the seven principles of software testing, and keeping an eye out for these clusters is the responsibility of a good tester.

As we log defects into a tracking tool or portal, teams generally follow the practice of tagging each defect with the relevant module, component or functional area. When tracked over time, this information can be real gold! It helps us see which areas of the application produce the most bugs.

We can plot these area metrics against the number of defects raised and find the defect rates over time. We can also create filters that raise concerns whenever the defect rate goes over a certain limit in any specific area or component. This helps us combat defect clustering by doing a fresh analysis, revisiting the tests being performed and focusing more of our exploratory test effort in those areas.
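To make this concrete, here is a minimal sketch of how such a filter might look, assuming a simple tracker export where each defect record carries a component field; the field names and the threshold are illustrative, not taken from any specific tool.

```python
# Minimal sketch: flag components whose defect count crosses a limit.
# The "component" field and the threshold are illustrative assumptions;
# adapt them to your tracking tool's export format.
from collections import Counter

DEFECT_LIMIT = 3  # hypothetical per-component alert threshold

def find_defect_clusters(defects, limit=DEFECT_LIMIT):
    """Count defects per component and return the ones at or over the limit."""
    counts = Counter(d["component"] for d in defects)
    return {component: n for component, n in counts.items() if n >= limit}

# Illustrative records, as they might come from a tracker export
defects = [
    {"id": 101, "component": "checkout"},
    {"id": 102, "component": "checkout"},
    {"id": 103, "component": "search"},
    {"id": 104, "component": "checkout"},
]
print(find_defect_clusters(defects))  # {'checkout': 3}
```

Running a report like this per sprint or per release turns the raw defect log into a trend we can actually act on.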

Overall, knowing about these defect clusters, keeping an eye out for them and regularly revisiting the areas will help us keep the quality of the entire system in check.

Frequency of defects (and their resolution)

The frequency of defects being found and logged tells us a lot about the maturity of the product.

At the beginning of construction sprints, defects are expected to be frequent and plentiful. We may not go by absolute numbers here, but by their relative trend. As we progress toward a release, the number of defects generally declines, indicating that the system is more mature and sturdier after withstanding multiple test cycles. Some teams even use the metric of mean time between failures as an exit criterion for testing, meaning they finish testing only once they cannot find any new defect for a certain number of days.
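As a sketch, assuming defect-found dates are available from the tracker, such an exit criterion could be checked like this (the seven-day window is an illustrative value):

```python
# Minimal sketch: "no new defects for N days" as a testing exit criterion.
from datetime import date

QUIET_DAYS_REQUIRED = 7  # hypothetical exit criterion

def testing_can_exit(defect_found_dates, today):
    """True if no new defect has been raised within the required window."""
    if not defect_found_dates:
        return True
    quiet_days = (today - max(defect_found_dates)).days
    return quiet_days >= QUIET_DAYS_REQUIRED

found = [date(2019, 6, 1), date(2019, 6, 10), date(2019, 6, 12)]
print(testing_can_exit(found, today=date(2019, 6, 20)))  # True: 8 quiet days
```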

As defects are raised, triaged, resolved and verified, there is a typical turnaround time that we expect. Most defects will go through this lifecycle within a reasonable stipulated time or will be postponed with a reason or business decision. Some defects may linger in the system for longer.

There may be a variety of reasons for these decisions:

  • A defect requires more information, and the developer is awaiting confirmation or details from the tester who raised it
  • The defect was misunderstood and there are comments going back and forth between the tester and developer about the expected behavior
  • The assigned developer was on vacation for a week and the defects have not been fixed, leading to a plateau in the defect-fix-rate graph
  • Defects are awaiting triage by the product owner and do not have priorities or the correct people assigned to them

Whatever the reason, knowing the cause of defects remaining open, in progress or unresolved for longer than a stipulated time is important. We may have to fix people issues or communication gaps, or may just need to schedule a short triage or discussion with the team to decide on the fate of such issues. But understanding any delays gives us much-needed insight into team dynamics and helps us smooth out the process.
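A simple ageing report makes these lingering defects visible. Here is a minimal sketch, assuming a stipulated turnaround of 14 days (an illustrative figure) and tracker records that carry a status and a raised-on date:

```python
# Minimal sketch: list open defects older than the stipulated turnaround.
from datetime import date

STIPULATED_TURNAROUND_DAYS = 14  # illustrative value

def overdue_defects(defects, today):
    """Defects still open or in progress beyond the stipulated time."""
    return [
        d for d in defects
        if d["status"] in ("open", "in progress")
        and (today - d["raised_on"]).days > STIPULATED_TURNAROUND_DAYS
    ]

defects = [
    {"id": 7, "status": "open", "raised_on": date(2019, 5, 1)},
    {"id": 8, "status": "resolved", "raised_on": date(2019, 5, 20)},
]
print([d["id"] for d in overdue_defects(defects, date(2019, 6, 1))])  # [7]
```

Each overdue entry then becomes a ready-made conversation starter for the next triage.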

The reasons behind rejected defects

The number and type of defects getting rejected — and the reasons behind the rejections — can also tell us a lot about the state of the product and the psychology of the team. If you see a high number of irreproducible defects, it may mean that some data or information is getting lost when reporting, or that the testers do not have enough time or perspective to reproduce the defects.

A high number of duplicate bugs may show that testers are unaware of the system’s history, or that they are new to the team and need a little more background. It may also be a case of the same bugs recurring after having been fixed and closed in previous releases.

Defects rejected as “Not a bug” or “Working as designed” tell us about a lack of understanding of the system on the testers’ side. Or it may be due to a lack of communication among team members, leading to different perceptions of the features that were designed or implemented.
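To spot these patterns early, a quick tally of rejection reasons from the tracker export helps. A minimal sketch, with illustrative status and reason labels:

```python
# Minimal sketch: tally rejected defects by rejection reason.
from collections import Counter

def rejection_breakdown(defects):
    """Count rejected defects by the reason recorded in the tracker."""
    return Counter(d["reason"] for d in defects if d["status"] == "rejected")

defects = [
    {"id": 1, "status": "rejected", "reason": "cannot reproduce"},
    {"id": 2, "status": "rejected", "reason": "duplicate"},
    {"id": 3, "status": "closed",   "reason": None},
    {"id": 4, "status": "rejected", "reason": "works as designed"},
]
print(rejection_breakdown(defects).most_common())
```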

Our findings from these types of defects can help test managers or project owners plan measures like internal training and knowledge sharing, which can enhance communication among team members, and introduce prerequisites to fulfill before logging any issues.

There is a world of information that your defects can provide. If you take a good look at your bugs and talk about them as a team, you can find ways to use that information to your advantage.

Read the full article: https://blog.gurock.com/three-learn-bugs/

Happy Testing!

What the NAPLAN Failure Tells Us About Testing in Education

Implications of Software Testing in the Field of Education

The National Assessment Program – Literacy and Numeracy (NAPLAN) is a set of school tests administered to Australian students. This August, the online program was offered to 1.5 million students, and many of them failed to log on.

Had the software undergone thorough functional testing, the program could have launched successfully. A functional testing company verifies that every function of the software works as per requirements. It is a black-box type of testing, where the internal structure of the product is not known to the tester.

Functional and Performance Issues – NAPLAN’s problems have been ongoing. In March, it took students 80 minutes to get into the online tests, against a requirement of 5 minutes; the software performed shockingly differently from what was planned. As a result, 30,000 students had to retake tests, which were also marred by technical glitches: test data was not saved automatically, and the data recovery time was 15 minutes against a requirement of zero. Once again, the software did not perform as expected. Eventually the problems were resolved, but at the expense of dropouts and time lags.

Accessibility Issues – NAPLAN’s software had other errors that a functional testing company could have taken care of. The features designed for students with disabilities were not functional. Alternate text was missing or incorrect, leaving content inaccessible to students with auditory disabilities. The color contrast was poor, which matters immensely to those who require accessibility help with seeing visuals.

In the NAPLAN case, a functional testing company would prepare several test cases to verify the functionality of the login page, the accessibility features, load times and data recovery times against the specified requirements. Functional testing would cover unit testing, integration testing, interface testing and regression testing. In addition to manual testing, a functional testing company would perform automation testing: software testing tools automate tests to improve the accuracy and speed of execution.
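As an illustration only (this is not NAPLAN’s actual test suite, and the endpoint is hypothetical), an automated functional check of the login requirement might look like this:

```python
# Minimal sketch of an automated functional check for a timing requirement.
# The URL is hypothetical; the threshold mirrors the 5-minute figure above.
import time
import requests

LOGIN_URL = "https://example.com/login"  # hypothetical endpoint
MAX_LOGIN_SECONDS = 5 * 60               # requirement: log in within 5 minutes

def test_login_page_meets_requirement():
    start = time.monotonic()
    response = requests.get(LOGIN_URL, timeout=MAX_LOGIN_SECONDS)
    elapsed = time.monotonic() - start
    assert response.status_code == 200, "login page failed to load"
    assert elapsed <= MAX_LOGIN_SECONDS, f"login took {elapsed:.0f}s"
```

Run under a load testing tool with many concurrent users, the same kind of check could have exposed the performance problem before 1.5 million students did.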

NAPLAN’s online system was reviewed by PricewaterhouseCoopers after these problems came to light. The report nails down the cause of the issues to a lack of automation testing: “[Education Services Australia] continues to work with [Education technology provider] Janison and Microsoft to improve upon the current recovery time of 80 minutes and recovery point of 15 minutes, and believe that eventually an automated service may become possible, however, the current environment is unable to do so.”

What do we learn?

NAPLAN’s failure tells us that software testing in the field of education is as important as in other industries like healthcare and banking. Modern school systems rely heavily on online resources, not simply for research but also for exams and coursework. As schools shift from traditional paper-based teaching methods to electronic systems, they must remember to test their software in step with technological demands.

Without robust testing, school systems can be severely impacted:

  • Administrative costs go up when exams have to be rescheduled for students.
  • The school also loses credibility as students mock the sluggish approach.
  • Students who are dedicated to their work become demotivated.
  • A positive school culture is likely to dwindle.

Before that happens, educational institutions must invest in software testing!

This is a guest post by Ray Parker

Author Bio:

Ray Parker is a senior marketing consultant with a knack for writing about the latest news in tech, quality assurance, software development and travel. With a decade of experience working in the tech industry, Ray now works out of his New York office.

Meeting James Bach at The Test Tribe Meetup @Bangalore

I got the opportunity to meet and listen to a test expert we all look up to, Mr. James Bach, at The Test Tribe community meetup organized in Bangalore on 23 June 2019.

It was a great talk on the topic ‘Testing vs Checking’, where he discussed the finer nuances of the testing craft and how automated checks are more explicit and fixed than the human brain and thought process of a real tester.

Apart from the great content, I also loved the presentation style, the ingenuity, the spontaneity and the interspersed humor! His true passion for testing and sheer amount of experience shine through every spoken word. We learn a lot just from being in the same room with such experts.

I tried my hand at #sketchnotes for the first time, trying to capture the gist of his talk.


It sure was an awesome experience and a day well spent! I look forward to meeting him again and getting an opportunity to learn from him!

Cheers!

Look Back to Plan Forward – Learnings from 2018

Every year we see the software industry evolving at a rapid pace. This implies changes in the way testing is conducted within the software lifecycle, test processes, techniques and tools, and the tester’s skill set, too.

I’ve been into agile for more than a decade, and I’m still learning, changing and growing each year along with our industry. Here are five of my key lessons and observations from 2018. I hope they help you in the coming year!

https://blog.gurock.com/lessons-for-agile-testers/

In my article published on the Gurock blog, I talk about five key lessons for agile testers from the past year and how they will be key in planning your road ahead in 2019. The key learning areas discussed are:

  • Testing Earlier in DevOps
  • Getting Outside the Box
  • Increasing Focus on Usability Testing
  • Enhancing Mobile and Performance Testing
  • Integrating Tools and Analyzing Metrics


The 12 Agile Principles: What We Hear vs. What They Actually Mean

The Agile Manifesto gives us 12 principles to abide by in order to implement agility in our processes. These principles are the golden rules to refer to when we’re looking for the right agile mindset. But are we getting the right meaning out of them?

In my latest article for the Gurock TestRail blog, I examine what we mistakenly hear when we’re told the 12 principles, what pain points agile teams face due to these misunderstandings, and what each principle truly means.

 

Principle 1: Our Highest Priority is to Satisfy the Customer Through Early and Continuous Delivery of Valuable Software

What we hear: Let’s have frequent releases to show the customer our agility, and if they don’t like the product, we can redo it.

The team’s pain points: Planning frequent releases that aren’t thought out well increases repetitive testing, reduces quality and gives more chances for defect leakage.

What it really means: Agile requires us to focus on quick and continuous delivery of useful software to customers in order to accelerate their time to market.

Principle 2:

Check out the complete post on the Gurock TestRail blog.

 

Do share your stories and understanding of the 12 Agile Principles!

Cheers

Nishi

Innovation Games – Part 4 – 20 / 20 Vision

Hello Readers!

Here we have for you another Innovation Game, centered around prioritization. It is a crisp, visual way to chart out and understand the priorities of upcoming tasks, features or stories.

Game: 20/20 Vision

Aim: To chart out the RELATIVE priorities of the tasks at hand

Method: The 20/20 game establishes the priority of each task in relation to a benchmark task of medium priority and complexity.

Just like a visit to the optometrist, where you compare various lenses to find the one best suited to your sight, in this game the team compares all stories, requirements or tasks and finds the right place for each on the priority chart in relation to the one benchmark level.

Description: Write down all the stories on post-its. With the team’s consensus, pick one story of medium priority and put it on the board in the middle.

Now the team goes through each remaining story one by one and places it on the board as higher or lower priority in relation to the benchmark story. At the end of the exercise, we arrive at a visual representation of the story or task prioritization, giving us a clear road map for the future!


This game takes only 15-30 minutes, depending on the number of tasks at hand, compared to long planning meetings.

Give it a try, it is fun! 🙂

Cheers,

Nishi

Innovation Games – Part 3 – Prune The Product Tree

Hello There!

Hope you are enjoying our series on Innovation Games and learning some new techniques to engage your agile team. In Part 1 and Part 2 we discussed some really fun and interesting agile Innovation Games.

In this part we shall discuss a really unique Innovation Game which helps the team and stakeholders gather a broader perspective on the product or project they are working on. While working on small components and intricacies of the project, it is possible for us to lose perspective and be confined to a narrow zone. Our game helps us to ‘zoom out’ periodically and get a bird’s-eye view of the project, its future and the road map ahead. It is called –

Prune The Product Tree –

Objective:

  • To identify the most important features and aspects of the product as per the stakeholders, and to elicit feedback from customers.

Method:

  • Draw a big tree on the chart and draw its branches. The thick branches represent the major functionalities of your system; the smaller branches are the functionalities within each major branch.
  • Participants write the new features they expect on index cards and place the cards on the respective branches.
  • We may also add apples for functionalities that will be very useful in upcoming releases, and flowers for the good-to-have features that may wow customers!


Analysis:

This gives an overview of the future direction of the product and a visual representation of which branch of the product tree is expanding the most.

Try this out with your team, and you will see the benefits soon! 🙂

Cheers.

Innovation Games – Part 1 – Mitch Lacey Team Prioritization

Hello there!

As promised, I am now beginning the series on the most popular Innovation Games, some of which I also featured in my session at the UNICOM World Business Summit.

The first one we take up is “Mitch Lacey Team Prioritization”

Objective: To prioritize the items in our ever-increasing backlog, which, if left untracked, can prove paralyzing for the agile team.

Method: Draw a chart with the x-axis as size (small to large) and the y-axis as priority (lower to higher). The graph is divided into three columns for easier segregation.

Start out by handing out post-its, each listing one backlog item; the team places each item in the corresponding area as per their perception and discussion.

This is what your chart should look like:

The Mitch Lacey Chart

Analysis:

  • The top-left corner of the graph holds the items with high priority and low effort, so these will automatically be the first items picked.
  • The top-right corner, on the other hand, holds items with high priority but high complexity, so these will be picked next.
  • Placing the ideas in this 2D space gives a clear visual representation of the next logical steps for the team, and also answers the vital question: “What should we do that will generate maximum value with minimum effort and complexity?”

Try it out – it’s fun and effective! 🙂

Cheers,

Nishi