Metrics your Agile team should & should not be tracking!

Agile teams are constantly running toward goals, requiring constant planning, monitoring, and re-planning. Metrics can help support these efforts by providing useful information about the health and progress of the project.

There are a few common metrics we use in agile teams: sprint burndown charts, release burnup charts, and team velocity. They’re common because they communicate practical information, but they’re not the only metrics we can employ.

In my recent articles for the TestRail blog, I described three uncommon metrics you can easily create that will be very useful for your agile team. I also wrote about three metrics that are not useful and that you should stop using now!

Here are the posts–>

Three Uncommon Metrics Your Agile Team Should Be Tracking

Here I described the three most useful metrics:

Defect Health

Defect Health Chart

Test Progress

Metric for weekly test progress

Build Failures

Sprint-wise metric for the number of build failures

Click here to read the complete article —>

Three Metrics Your Agile Team Should Stop Using

Metrics are supposed to help and support an agile team by providing useful information about the health and progress of their project. But not all metrics are always beneficial. Going overboard with them can sometimes cause more harm than good.

In this post I have described three metrics that can impede your agile team instead of motivating it:

  • Defect Counts
  • Hours
  • Lines of Code or Defect Fixes per Developer

Click here to read the complete article–>

Please share your experiences with metrics and how they helped or impeded your progress!

Cheers

Nishi

Overcoming Barriers to Effective Communications in Agile Teams

Communication is the foundation of success for an agile team. Agile teams need to set up effective communication channels and have a culture of constant communication for complete transparency.

However, there are often several challenges that act as barriers to productive communication and may lead to people problems as well as delayed or failed projects. In my article for TestRail (https://blog.gurock.com/agile-barrier-communication/), I have discussed some of the most common barriers to effective communication for agile teams, as well as how you can overcome them.

  • Physical Barriers
  • Cultural and Language Barriers
  • Emotional Barriers
  • Perceptual Barriers

Read the complete article here ->

Agile teams require constant communication, so it immensely benefits the team to recognize their barriers to effective communication and take some measures to overcome these barriers. Every step taken in this regard leads the team farther down their path to true agility.

Speaking at the DevOps & Agile Testing Summit – 8 Nov ’19, Bangalore

I was invited to speak at the DevOps and Agile Testing Summit organised and conducted by 1.21GWs on 8th Nov 2019 in Bangalore. It was a great event that brought together many keen minds as delegates and many inspiring speakers. https://1point21gws.com/devops/bangalore/

My talk was on “The Building Blocks of a Robust Test Automation Strategy”. As we know, testing teams face a number of questions, decisions and challenges throughout their test automation journey. But there is no single solution to their varied problems! In this talk I outlined a number of strategies that agile teams can follow, be it their selection of what to automate and how much, what approaches to follow, whom to involve, and when to schedule these tasks so that releases are of the best quality.

I am grateful that my talk was so well received and led to great discussions later with many participants. I enjoyed the day and am always glad to be invited by the 1.21GWs team.

A peek into the event – pictures from my session

@Sahi Pro was also a knowledge partner at the event, and delegates got a peek into Sahi Pro via video and brochure handouts.

Looking forward to many more successful events! 🙂

The Agile Mindset: Cultural Changes for Successful Transformation

Agile transformations can be a challenging undertaking, and many organizations struggle with what is probably the hardest part of the transition: adopting an agile mindset. It is imperative that teams embrace the agile culture before they can fully embrace agile.

Let’s discuss the major cultural shifts needed for a successful agile transformation. Full article-> https://blog.gurock.com/agile-mindset/

Collaborating to Make Decisions

As I always like to say, agile is more a mindset than a process. It guides you to a better way of working and collaborating in order to deliver the most value to your users. But how you choose to implement those guidelines is up to you, and most teams coming from a traditional style of software development find this aspect the most challenging.

Teams are left to find ways to work together rather than having a process forcing them to do certain actions, follow certain processes, or organize specific meetings. There are no templates or techniques to adhere to and no rules to follow strictly.

This may come as a surprise and leave teams guessing since they are used to being told what to do and how. Agile drives them to think on their feet as they plan and replan their way through the development process. Read More–>

Being Comfortable with Visibility & Exposure

Agile gives everyone a voice and values every person’s opinion. Many teams have been used to only the manager speaking for them or having one representative in most meetings. As a result, some team members may feel flustered now that they’ll occasionally be in the spotlight. People who are not used to voicing their opinion are expected to speak in all forums. Hiding behind the team is no longer an option in agile.

This also means team members are valued as individuals and everyone’s contribution is recognized. Agile treats all team members as equals, whatever their role or designation. They are expected to estimate their own tasks, pick things to work on, collaborate with other team members, and provide value by the end of each iteration. Continue Reading–>

Increasing Communication and Collaboration

Communication is a big factor in agile teams. Developers and testers are always expected to be co-owners of their features and user stories, so they need to collaborate constantly. Business analysts and product owners also need to collaborate with the team to ascertain requirements, answer questions and get clarifications.

A single scheduled sync point or meeting during the day is no longer enough. Teams need to learn to collaborate rather than handing off work from one person to the next. The tester-developer relationship sees a new dynamic of working toward the same goal rather than against each other. This may be the toughest of all cultural shifts, so it needs proper grooming from the managers and product owner.

We can no longer rely on metrics like the number of defects logged to find which tester performed the best, or defects logged against a feature to find developers’ efficiency. These are not useful measurements for agile teams and are not good for promoting collaboration.

Managers must encourage team spirit. Instead of pitting developers and testers against each other, managers should promote collective ownership of a user story by a developer and a tester. Continue Reading–>

Embrace the Agile Mindset

Ceremonies and meetings can be organized and repeated easily, but the culture and mindset that are needed to succeed in your agile transformation journey do not come in a single day. Time and patience will be required to resolve people issues, answer questions and doubts, and schedule multiple types of training and team activities to get everyone on board. But these small steps can go a long way toward making teams understand and embody the spirit of agile.

Please read, comment and like my article at TestRail blog https://blog.gurock.com/agile-mindset/

3 ways Agile testers can use Walkthroughs

A walkthrough is a great review technique that can be used for sharing knowledge, gathering feedback and building a consensus. Typically, walkthroughs take place when the author of a work item explains it in a brief review meeting — effectively walking the team through the work — and gathers people’s feedback and ideas about it. These meetings may be as formal or as informal as needed and may also be used as a knowledge-sharing session with many relevant stakeholders at once.

In my article published at https://blog.gurock.com/tester-agile-walkthrough/, I have discussed three ways agile testers can make use of this type of review for their sprint- and release-level test plans and test cases to get the entire team involved in the quest for quality.

I have also discussed how I have used walkthroughs in my agile team as a mechanism to review our sprint test scenarios with the entire Scrum team. The main areas of application are:

  • Defining Scope
  • Generating Test Ideas
  • Building a Consensus

Click here to read the complete article–>

Walkthroughs are a quick and easy review technique to adopt, and they can be especially useful for testers on an agile team to get reviews on their test plans, test cases, and scripts. Give this technique a try, even if in an informal sense, and see how beneficial it can be!

Components of a Defect Management Software

Since software developers and testers work together in Agile and DevOps environments, it can be challenging to cope with the increasing competition. Development teams work in collaboration with various stakeholders to make the most of their testing efforts. Defects in software applications are the norm, and the sooner you realize that, the better. It is impossible to have a 100% defect-free software application, but experts work to make the most of their efforts. The current need for faster delivery and quality products calls for robust software testing solutions that can meet customer expectations.

A defect management system is a defect repository where all the defects appearing in a system are identified, recorded and assigned for rectification. It includes defect management software and tools that help teams manage defects efficiently.

How Does Defect Management Work?

A defect management system works in a systematic manner: it records all the defects in the system without duplication and maintains a log for future use. The steps involved in defect management are explained below:

Identification – First of all, testers identify the defects.

Categorization – Once a defect is reported, it is assigned to a team member to assess whether to rectify it or leave it as is.

Prioritization – The next step is to prioritize the defects by assessing their severity and impact on the user. The prioritized defects are handled by a formal change control board.

Assigning Defects – After categorization and prioritization, the defects are assigned to developers or testers accordingly.

Resolving Defects – The developer resolves the defect and follows the process to move further in the defect management workflow.

Verification – The software testing team verifies the fix in the environment in which the defect was found.

Closure – The defect is closed once the fix is verified.

Reporting – Reports are then provided to the relevant stakeholders regularly. They can also be produced on demand according to requirements.
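To make the flow above concrete, here is a minimal sketch of the defect lifecycle as a small state machine in Python. This is just an illustration of the idea, not any particular tool’s API; all the class, state and field names are hypothetical.

```python
from enum import Enum

class DefectState(Enum):
    IDENTIFIED = "identified"
    CATEGORIZED = "categorized"
    PRIORITIZED = "prioritized"
    ASSIGNED = "assigned"
    RESOLVED = "resolved"
    VERIFIED = "verified"
    CLOSED = "closed"

# Allowed transitions mirror the steps described above
TRANSITIONS = {
    DefectState.IDENTIFIED: {DefectState.CATEGORIZED},
    DefectState.CATEGORIZED: {DefectState.PRIORITIZED},
    DefectState.PRIORITIZED: {DefectState.ASSIGNED},
    DefectState.ASSIGNED: {DefectState.RESOLVED},
    DefectState.RESOLVED: {DefectState.VERIFIED},
    DefectState.VERIFIED: {DefectState.CLOSED},
}

class Defect:
    def __init__(self, defect_id, summary):
        self.defect_id = defect_id
        self.summary = summary
        self.state = DefectState.IDENTIFIED
        self.history = [self.state]  # log kept for future use

    def advance(self, new_state):
        """Move the defect forward, rejecting any out-of-order transition."""
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(
                f"Cannot move from {self.state.name} to {new_state.name}"
            )
        self.state = new_state
        self.history.append(new_state)
```

A real tool would allow loops (e.g. a failed verification sending the defect back to the developer), but even this simple version shows why a systematic process prevents defects from skipping triage or closing unverified.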

The entire process sounds pretty simple and easy, although it is not. Defect management is tricky, but using the right defect management software and tools helps in achieving the desired results. These tools can also be integrated with other software testing tools to enhance and align the processes between testers, developers and other relevant stakeholders. This aids all experts and specialists in planning, building and developing a quality software product. Business analysts, product managers, developers, project managers, testers, etc. all operate in a single system, sharing the same information. Defect management tools are also integrated with the test management system, which helps testing teams share the project status with other key stakeholders.

All these stages of the defect management process come together in a defect management software. Generally, defects are considered negative, but that may not be the case in all instances. Sometimes a tester makes a mistake while recording a defect, which can affect the entire process. So we can’t truly say that defect management software or tools alone are responsible for effective defect management; it is also important to handle the defects efficiently.


This is a guest post by Ray Parker

Author Bio: Ray Parker is a senior marketing consultant with a knack for writing about the latest news in tech, quality assurance, software development and testing. With a decade of experience working in the tech industry, Ray now dabbles out of his New York office.

My experience speaking at Targeting Quality 2019, Canada

I am back from the trip to Canada which followed the big day that was #TQ2019, so I finally have a chance to share my experiences. This event (https://kwsqa.org/tq2019/schedule/), organised by KWSQA, was special in a number of ways:

  1. It was my first international conference talk 🙂
  2. I was one of the few international speakers at the conference, and the one who traveled the farthest for it!
  3. I was the only speaker presenting 2 talks!

The travel was big too – with tonnes of visa processing, a 24-hour-long flight to Toronto and then a bus ride from Toronto to Cambridge (which I nearly missed 😛 owing to the infamous Toronto traffic!)

Day 1 of the event was workshops, which were already in progress when we arrived, and we got a chance to informally meet the organizers at the desk. That evening they had planned a speaker dinner, which was a great idea. I got to meet and interact with all the speakers and made some friends, so the next day seemed a little less daunting with so many known faces.

24 Sep was the big conference day. Staying at the same hotel gave me the advantage of getting ready at my own pace and being on time for breakfast. The event began with a brief intro and then split into tracks. The first talk I attended was ‘Lean Coffee Facilitators Training’ by Matt Heusser. It was my first time hearing him speak, and his session was fun, engaging and practical. I did #sketchnotes for the talk and also participated in the activity, which was fun!

After that was my own session in the next room, so I hurried to set up and get ready. The best part was that the organisers had planned a 15-minute gap between talks for Q&A/networking, which gave the speakers and the delegates some breathing room and time to get to other sessions.

I talked on ‘The What, When and How of Test Automation’, a 45-minute session. The room was full, and there were lots of good questions and participation from the audience. I felt that I handled it well and that the topic as well as the proposed ideas were well received! 🙂 Here are a few glimpses into my talk:

Though I was relieved having just delivered a good talk, I still had one more to go! After that was the lunch hour. A few participants from my talk invited me to sit at their table, and we had so many discussions about work, testing, as well as my travel plans 😛

Then we got back to talks. I also attended a talk on ‘Barriers in Accessibility Testing’ by Albert Gareev, which I also #sketchnoted.

After that I rushed to the lightning talks track, as I had to prepare for my next talk, a 15-minute session on ‘Gamify your Agile Workplace’. As I got there, I heard Richard Strang talk about ‘Implementing an Agile QA Guild’ and his experiences, which were so varied and interesting. Then I got up to speak, and since I was talking about an innovation game called Speed Boat, I had to first draw a big speed boat on the flipchart (with my limited drawing skills 😛) with a room full of people staring! I guess I managed well, as the room MC Tina Fletcher (also president of KWSQA) was impressed with my masterpiece 😛 hehe

The session went well, the best bit being our keynote speaker Damian Synadinos attending as well as volunteering for the little game we played. It was an honor and an unforgettable experience. I hope the audience took back something tangible to try out gamification in their agile teams.

With both the talks done, it was now time to relax and network. I stopped by the booths by Oracle and NPM, chatted with fellow speakers and delegates, the organizers and also got real time feedback from the attendees who chose to attend my sessions.

After the little coffee break was the grand closing keynote by Damian, and it really was an experience. He mentioned in his intro that he had some improv experience, and he really uses it to the best in his speaking! The talk was funny and intriguing, with loads of content, memorable quotes, as well as an activity in which I volunteered! And a big plus: Damian mentioned me and my talk too! 🙂 🙂 All in all, it was an epic performance and really inspiring for me as a speaker. Kudos to the effort that went into putting this together.

The best parts were getting to know so many wonderful people like Josh, Bailey and Dani, and getting to meet @Matt Heusser, whom I have had the chance to work with online. A face-to-face interaction makes things seem so real and people so approachable. He is a gem of a person and so encouraging too. I also made a friend, @Emna, who came from Tunisia to speak at the event! We roamed the streets of Cambridge and rode buses together, and by the end it seemed like we had known each other for so long. I surely hope to see her again at a future conference.

The organizers of TQ2019 had really worked hard, and their efforts paid off: a grand event pulled off with great ease, smooth flow and right on schedule. They welcomed us with warmth and helped throughout the day. At the end of the day we all got some time to cool off at a social event, where we mingled and got a chance to express our gratitude and say goodbyes. I would like to personally thank Graeme Harvey, Sabina, Rob, Josh Assad, Jared and Tina Fletcher from the KWSQA committee, who were all so helpful and kind.

I am thankful for this opportunity and look forward to staying connected with such awesome people. I am also thankful for my supportive hubby, who tagged along so that we could make this into a trip: we got a chance to explore Toronto, Montreal and Quebec City, and of course the majestic Niagara Falls! 🙂

Cheers to @KWSQA #TQ2019 and many more to come! 🙂

I am speaking at ‘Targeting Quality 2019’ , Canada

I am super excited to be speaking at this grand event TQ2019 being organised by KWSQA on 23-24 Sep in Canada!

On top of that, I get to present not one but two talks!! My topics are:

“The What, When & How of Test Automation” 45 mins

In this talk I will discuss preparing robust automation strategies. Agile means pace, and agile means change. With frequent time-boxed releases and flexible requirements, test automation faces numerous challenges. Haven’t we all asked what to automate and how to go about the daily tasks with the automation cloud looming over our heads? Here we’ll discuss answers to some of these questions and try to outline a number of approaches that agile teams can take in their selection of what to automate, how to go about their automation, whom to involve, and when to schedule these tasks so that the releases are debt-free and of the best quality.

“Gamify your Agile workplace”    15 mins

In this talk I’ll present some innovation games live and have audience volunteers engage and play games based on known scenarios. Let’s play and learn some useful innovation games that can help you gamify your agile team and workplace, making team meetings shorter and communication more fun!

Both these topics are close to my heart and I am looking forward to sharing my thoughts with a wider audience.

I am also excited to meet all the awesome speakers at the event, as well as get to know the fantastic team of organizers behind it!

Check out the detailed agenda here – https://kwsqa.org/tq2019/schedule/

Follow me at @testwithnishi, @KWSQA and #TQ2019 on twitter for more updates on the event!

Also check out & support other initiatives by KWSQA at https://kwsqa.org/kwalitytalks/

Wish me luck! 🙂

What can you learn from the defects you found?

The bugs we find during testing can tell us a lot about the application, the state of its quality and its release-readiness. Bugs can also provide insights into our development processes and practices — and lapses therein.

How can we study bugs to improve the overall state of our project? In my article published on the Gurock TestRail blog, I have described three things you can learn from the bugs you find. https://blog.gurock.com/three-learn-bugs/

 The location of defect clusters

Defect clustering is one of the seven principles of software testing, and keeping an eye out for these clusters is the responsibility of a good tester.

As we log defects into a tracking tool or portal, teams generally follow the practice of recording the relevant module, component or functional area against each defect. When tracked over time, this information can be real gold! It helps us see which areas of the application have more bugs.

We can plot these areas against the number of defects raised and find the defect rates over time. We can also create filters that raise concerns whenever the defect rate goes over a certain limit in any specific area or component. This can help us combat defect clustering by doing a fresh analysis, revisiting the tests being performed and focusing more of our exploratory test efforts on those areas.
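As a rough illustration of this kind of filter, here is a small Python sketch that counts defects per component and flags any component crossing a chosen limit. The defect records and the threshold are made up for the example, not taken from any real tool.

```python
from collections import Counter

def defect_clusters(defects, threshold):
    """Count defects per component and flag components
    whose defect count exceeds the threshold."""
    counts = Counter(d["component"] for d in defects)
    flagged = {c: n for c, n in counts.items() if n > threshold}
    return counts, flagged

# Hypothetical defect log entries, as exported from a tracking tool
defects = [
    {"id": 1, "component": "checkout"},
    {"id": 2, "component": "checkout"},
    {"id": 3, "component": "checkout"},
    {"id": 4, "component": "search"},
]

counts, flagged = defect_clusters(defects, threshold=2)
# "checkout" exceeds the limit, so it is flagged for fresh analysis
```

Run weekly over the exported defect log, such a filter gives you the early warning described above without any manual chart-watching.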

Overall, knowing about these defect clusters, keeping an eye out for them and regularly revisiting the areas will help us keep the quality of the entire system in check.

Frequency of defects (and their resolution)

The frequency of defects being found and logged tells us a lot about the maturity of the product.

At the beginning of construction sprints, defects are expected to be frequent and plentiful. We may not go by absolute numbers here, but by their relative trend. As we progress toward a release, the number of defects generally declines, indicating that the system is now more mature and sturdier after withstanding multiple test cycles. Some teams even use the metric of mean time between failures as an exit criterion for testing, meaning they only finish testing once they cannot find any new defect for a certain number of days.

As defects are raised, triaged, resolved and verified, there is a typical turnaround time that we expect. Most defects will go through this lifecycle within a reasonable stipulated time or will be postponed for a reason or by a business decision. Some defects may linger in the system for longer.

There may be a variety of reasons for these decisions:

  • A defect requires more information, and the developer is awaiting confirmation or details from the tester who raised it
  • The defect was misunderstood and there are comments going back and forth between the tester and developer about the expected behavior
  • The assigned developer was on vacation for a week and the defects have not been fixed, leading to a plateau in the defect-fix-rate graph
  • Defects are awaiting triage by the product owner and do not have priorities or the correct people assigned to them

Whatever the reason, knowing the cause of defects remaining open, in progress or unresolved for longer than a stipulated time is important. We may have to fix people issues or communication gaps, or may just need to schedule a short triage or discussion with the team to decide on the fate of such issues. But understanding any delays gives us a much-needed insight into team dynamics and helps us smooth out the process.
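One simple way to spot such lingering defects is to flag anything still open past the stipulated turnaround. Here is a minimal Python sketch, with a made-up 10-day policy and hypothetical defect records; your team’s own policy and field names would differ.

```python
from datetime import date, timedelta

STIPULATED_TURNAROUND = timedelta(days=10)  # assumed team policy

def stale_defects(defects, today):
    """Return open defects that have exceeded the stipulated turnaround."""
    return [
        d for d in defects
        if d["status"] != "closed"
        and today - d["raised_on"] > STIPULATED_TURNAROUND
    ]

# Hypothetical defect records
defects = [
    {"id": 101, "status": "open", "raised_on": date(2019, 10, 1)},
    {"id": 102, "status": "closed", "raised_on": date(2019, 10, 1)},
    {"id": 103, "status": "in_progress", "raised_on": date(2019, 11, 10)},
]

stale_defects(defects, today=date(2019, 11, 12))
# flags defect 101 only: open for 42 days
```

The flagged list is exactly what you would bring to a short triage session to decide the fate of each lingering issue.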

The reasons behind rejected defects

The number and type of defects getting rejected — and the reasons behind the rejections — can also tell us a lot about the state of the product and psychology of the team. If you see a high number of irreproducible defects, it may mean that some data or information is getting lost when reporting, or that the testers do not have enough time or perspective to reproduce the defects.

A high number of duplicate bugs may show that testers are unaware of the system’s history, or maybe they are new to the team and need to get a little more background. It may also be a case of the same bugs reoccurring, which might have been fixed and closed in previous releases.

Incorrect defects marked with “Not a bug” or “Working as designed” tell us about a lack of understanding of the system on the testers’ side. Or it may be due to a lack of communication among the team members, leading to different perceptions about the features that were designed or implemented.
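A quick tally of rejection reasons is often all you need to spot these patterns. Here is a tiny Python sketch; the records and reason labels are hypothetical, standing in for whatever your tracking tool exports.

```python
from collections import Counter

# Hypothetical rejected-defect records exported from a tracking tool
rejected = [
    {"id": 1, "reason": "irreproducible"},
    {"id": 2, "reason": "duplicate"},
    {"id": 3, "reason": "not_a_bug"},
    {"id": 4, "reason": "duplicate"},
]

# Count how often each rejection reason occurs
reason_counts = Counter(d["reason"] for d in rejected)
top_reason, top_count = reason_counts.most_common(1)[0]
# top_reason == "duplicate": a signal that testers may need more
# background on the system's history before logging issues
```

Reviewing this tally at each retrospective turns rejected defects from noise into a concrete input for training and knowledge sharing.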

Our findings from these types of defects can help test managers or project owners plan measures like internal trainings and knowledge sharing, which can enhance communication among team members and introduce prerequisites to fulfill before logging any issues.

There is a world of information that your defects can provide. If you take a good look at your bugs and talk about them as a team, you can find ways to use that information to your advantage.

Read more–> Click here for the full article

Happy Testing!

What the NAPLAN Fail Tells Us About Testing in Education

Implications of Software Testing in the field of Education

The National Assessment Program – Literacy and Numeracy (NAPLAN) is a series of school tests administered to Australian students. This August, the online program was offered to 1.5 million students. Students failed to log on.

Had the software undergone functional testing, the program could have launched successfully. A functional testing company verifies that every function of the software works as per requirements. It is a black-box type of testing, where the internal structure of the product is not known to the tester.

Functional and Performance Issues – NAPLAN’s problems have been ongoing. In March, it took students 80 minutes to get to the online tests; the requirement was 5 minutes. The software performed shockingly differently from what was planned. 30,000 students had to retake tests, which too were marred by technical glitches. The test data was not automatically saved: the data recovery time was 15 minutes compared to the requirement of zero minutes. Once again, the software did not perform as expected. Eventually, the problem was resolved; however, it came at the expense of dropouts and time lags.

Accessibility Issues – The NAPLAN software had other errors that a functional testing company could have taken care of. The features designed for students with disabilities were not functional. Alternate text was missing or incorrect, making content inaccessible for students with auditory disabilities. The color contrast was poor, which mattered immensely to those who required accessibility help with seeing visuals.

In the NAPLAN case, a functional testing company would prepare several test cases to verify the functionality of the login page, accessibility features, load times and data recovery times against the specified requirements. Functional testing would cover unit testing, integration testing, interface testing and regression testing. In addition to manual testing, a functional testing company would perform automation testing, using software testing tools to automate tests and improve the accuracy and speed of execution.
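Even the requirements quoted above can be captured as simple automated checks. Here is a minimal sketch, with hypothetical measured values standing in for what a real test harness would supply; the limit constants simply restate the requirements mentioned in this post.

```python
# Requirements from the article, expressed as simple checks.
LOGIN_TIME_LIMIT_MIN = 5      # tests must be reachable within 5 minutes
RECOVERY_TIME_LIMIT_MIN = 0   # test data must be saved with zero recovery time

def check_requirements(measured_login_min, measured_recovery_min):
    """Compare measured timings against the stated requirements."""
    failures = []
    if measured_login_min > LOGIN_TIME_LIMIT_MIN:
        failures.append("login/load time exceeds requirement")
    if measured_recovery_min > RECOVERY_TIME_LIMIT_MIN:
        failures.append("data recovery time exceeds requirement")
    return failures

# The figures reported for NAPLAN in March (80 min login, 15 min recovery)
check_requirements(measured_login_min=80, measured_recovery_min=15)
# -> both requirements fail
```

Checks like these, run automatically on every build, would have surfaced the gap between requirement and reality long before 1.5 million students did.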

NAPLAN’s online system was reviewed by PricewaterhouseCoopers, and the report reflects these problems. It nails down the cause of the issues to a lack of automation testing: “[Education Services Australia] continues to work with [Education technology provider] Janison and Microsoft to improve upon the current recovery time of 80 minutes and recovery point of 15 minutes, and believe that eventually an automated service may become possible, however, the current environment is unable to do so.”

What do we learn?

NAPLAN’s fail tells us that software testing in the field of education is as important as in other industries like healthcare and banking. Modern school systems rely heavily on online resources, not simply for research but also for exams and coursework. As schools shift from traditional paper-based teaching methods to electronic systems, they must remember to test their software in line with technological demands.

Without robust testing, school systems can be severely impacted.

  • Administrative costs would go up from rescheduling exams for students.
  • The school would also lose credibility, as students would mock the sluggish approach.
  • Students who are dedicated to their work would become demotivated.
  • A positive school culture would likely dwindle.

Before that happens, educational institutions must think of software testing!

This is a guest post by Ray Parker

Author Bio:

Ray Parker is a senior marketing consultant with a knack for writing about the latest news in tech, quality assurance, software development and travel. With a decade of experience working in the tech industry, Ray now dabbles out of his New York office.