Making the case for Usability Testing in Agile

My first experience with usability testing was on an agile team where the product we were building was designed with the help of an in-house usability expert. He helped design the user interface (UI) of the application and conducted a usability study on the beta version of the software to determine its ease of use.

Though the experience was limited in terms of our interaction with the user representatives and the number of sessions conducted, the feedback we received opened up many new avenues for the tester in me around the learnability, understandability and attractiveness of the application I was testing.

Usability has matured a lot over the years and is now an essential characteristic of today’s web and mobile applications. In my article published on the TestRail blog, I discuss ways of performing usability tests and developing a usability mindset in an agile context.

https://blog.gurock.com/usability-testing-agile-projects/

The article also covers usability studies: how to set one up and how to get the maximum benefit from it.



‘Just Enough’ documentation in an Agile Project

Agile poses many challenges to the development team, most of them pertaining to time. Teams are perpetually under pressure to deliver working software at a fast pace, leaving minimal time for anything else. When testing on an agile project, learning how to write lean documentation can save precious time. Furthermore, lean documentation can reduce rework by focusing only on what’s really necessary.

The Agile Manifesto emphasizes working software over comprehensive documentation, but many agile teams interpret this wrongly and treat documentation as something to be avoided, owing to time constraints. The manifesto calls for less focus on comprehensive documentation, but some documentation is still needed for the project and any related guidelines being followed. Attaining this balance is a challenge.

Documentation is a necessary evil. We may think of it as cumbersome and time-consuming, but the project cannot survive without it. For this reason, we need to find ways to do just enough documentation — no more, no less.

Read about how to focus on important areas like VALUE, COMMUNICATION and SUFFICIENCY when documenting in your agile project in my article published on the Gurock TestRail blog: https://blog.gurock.com/lean-documentation-agile-project/



For example, in a traditional test design document, we create columns for test case description, test steps, test data, expected results and actual results, along with preconditions and post-conditions for each test case. There may be a very detailed description of test steps, and varying test data may also be repeatedly documented. While this is needed in many contexts, agile testers may not have the time or the need to specify their tests in this much detail.

As an agile tester, I have worked on teams following a much leaner approach to sprint-level tests. We document the tests as high-level scenarios, with a one-line description of each test and a column for details like any specific test data or the expected outcome. When executing these tests, the tester may add relevant information for future regression cycles, as well as document test results and any defects.
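To make this concrete, here is a minimal sketch of what such a lean scenario sheet might look like as data. All IDs, field names and the helper function are invented for illustration; real teams would keep this in their test management tool or a shared sheet.

```python
# A minimal sketch of a lean, sprint-level test sheet (all field names are hypothetical).
# Each scenario is one line: a short description plus a single "details" column
# that holds test data or the expected outcome only where it genuinely matters.

scenarios = [
    {"id": "US42-T1", "scenario": "Login with valid credentials", "details": "expect dashboard"},
    {"id": "US42-T2", "scenario": "Login with expired password", "details": "expect reset prompt"},
    {"id": "US42-T3", "scenario": "Lockout after 3 failed logins", "details": "data: wrong pwd x3"},
]

def record_result(sheet, test_id, passed, note=""):
    """During execution, the tester adds just enough info for future regression cycles."""
    for row in sheet:
        if row["id"] == test_id:
            row["result"] = "PASS" if passed else "FAIL"
            if note:
                row["details"] += f"; {note}"
            return row
    raise KeyError(test_id)

record_result(scenarios, "US42-T2", False, "defect DEF-101 raised")
print(scenarios[1]["result"])  # FAIL
```

The point of the sketch is the shape, not the code: one line per scenario, details only where they add value, and results folded in during execution rather than in a separate heavyweight document.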

More examples and scenarios for creating leaner test documents are included in the full article: https://blog.gurock.com/lean-documentation-agile-project/

 

Are you interested in finding the right tool for your agile processes? Here is a comprehensive assessment and comparison of the best agile tools available:

https://thedigitalprojectmanager.com/agile-tools/

Prepared by Ben Aston, this list may be a useful guide for finding and selecting the best tool to support your agile journey. Check it out!

 

Happy Testing!

Nishi

Better Software Design Ideas for the Hawaii Emergency Alert System

Continuing the discussion on the Hawaii missile alert, which made headlines in January 2018 when it turned out to be a false alarm that raised panic amongst almost a million people of the state for nothing, I would like to bring the focus back to the implications of poor software design leading to such human errors.

Better software design aims to make the software easier to use, fit for its purpose, and to improve the overall experience of the user. While software design focuses on making all features easily accessible, understandable and usable, it can also be directed at making the user aware of all possibilities and implications before they act. Critical actions can and should be made more distinct than others, with added security or authorisation steps and visual hints indicating their critical nature.

Some of the best designers at freelancer.com came together to brainstorm ideas for better software design and to revamp the Hawaii government’s inept designs. They ran a contest amongst themselves to come up with the best designs that could avoid such a fiasco in the future.

Sarah Danseglio, from East Meadow, New York, took home the $150 grand prize, while Renan M. of Brazil and Lyza V. of the Philippines scored $100 and $75 for coming in 2nd and 3rd, respectively.

Here is a sneak peek into how they designed the improved system:

Hawaii False Missile Alarm – was it entirely a Human Error?

Software impacts human lives – let us put more thought into it!

Here is what happened, and my take on how software design may have been partly responsible and could be improved.

Hawaii Shocked!

The US state of Hawaii suffered a massive panic on Saturday, the 13th of January 2018. More than a million people in Hawaii were led to fear that they were about to be struck by a nuclear missile, due to a message sent out by the state emergency management agency. The message, sent state-wide just after 8 a.m. that Saturday, read: “BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL.”


The residents were left in a state of panic. People started scrambling to get to safe places, gathering supplies and even saying their goodbyes. Some took shelter in manholes, some gathered their kids into the most sheltered rooms in their homes like bathrooms or basements, some huddled in their closets and some sent out goodbye messages to their loved ones.

It turned out to be a false alert. Around 40 minutes later, the agency sent out another message saying that the alarm had been sent out by mistake!

The questions being asked were: how could this happen, and why did it take 40 minutes to check and issue an all-clear?

 

Why Did This Happen?

The findings of the investigation into the incident were revealed, and the governor stated that “it was a procedure that occurs at the change of shift which they go through to make sure that the system is working, and an employee pushed the wrong button.”

The error occurred when, in the midst of a drill during a shift change at the agency, an employee made the wrong selection from a “drop-down” computer menu, choosing to activate a missile launch warning instead of the option for generating an internal test alert. The employee, believing the correct selection had been made, then went ahead and clicked “yes” when the system’s computer prompt asked whether to proceed.

Analysing the Root Cause

But is the fault only at the human level? Software used for such critical purposes also needs to help prevent the possibility of such human errors.

After all, triggering such a massive state-wide emergency warning should not have been as simple as the push of a wrong button by a single person!

Could a better design of the software have prevented this kind of scenario from happening?

As reported, the incorrect selection was made in a dropdown, which, let’s imagine, would look something like this:

Hawaii State Emergency Sample System (illustrative)

After the selection was made, the system sent a prompt and the employee, believing the correct selection had been made, then went ahead and clicked “yes”.

From this information, we can assume that the prompt was something generic, like:

Hawaii State Emergency Sample Prompt (illustrative)

 

Though it definitely was a human error, isn’t the system also at fault for letting this happen so easily?

Better Design Ideas – More Thought – Improving Your Software

By putting some extra thought into the design of the software, we can make it more robust against such incidents.

Here are some things that could have helped design it better –

  1. Do not place the TEST options right next to the ACTUAL emergency options!

Have different fields, or perhaps different sub-menus inside the dropdown as categories.

Segregating the actions in the dropdown into categories

 

>> Always have the TEST category of warnings higher up in the list

>> Have the default selection in the dropdown either as BLANK or as one of the TEST warnings, never one of the actual ones

>> Having the actual warnings section lower down and clearly separated from the similarly worded TEST warnings lowers the chance of wrongly selecting a similar-looking option from the dropdown
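The segregation and default rules above can be sketched as a small menu model. All labels and the structure here are invented for illustration; the real system’s options are not public.

```python
# Sketch of a categorized alert menu with a safe default (all labels are invented).
# TEST options come first; ACTUAL warnings sit in a separate, clearly marked group.

MENU = {
    "TEST": ["TEST - Missile alert drill", "TEST - Tsunami alert drill"],
    "ACTUAL": ["ACTUAL - Ballistic missile alert", "ACTUAL - Tsunami alert"],
}

DEFAULT_SELECTION = None  # blank by default: the operator must make an explicit choice

def is_real_warning(selection):
    """Real warnings are detected by category, not by string-matching similar labels."""
    return selection in MENU["ACTUAL"]

def validate_selection(selection):
    """A blank default cannot be submitted by accident."""
    if selection is None:
        raise ValueError("No alert selected - please choose an option explicitly")
    return selection

print(is_real_warning("TEST - Missile alert drill"))        # False
print(is_real_warning("ACTUAL - Ballistic missile alert"))  # True
```

The key design choice is that “real vs. test” is a property of the menu category, so no downstream code ever depends on two similarly worded strings being told apart by a hurried human.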

 

  2. The prompt message must be made unique to each scenario, and when a real warning issue action is selected, the prompt must ask the user to specify the emergency.

Unavoidable prompt with explicit message (illustrative)

 

>> Make the prompt appear critical with the use of colour and text

>> A critical prompt must catch the user’s attention and not look like the system’s other screens and popups, to avoid the possibility of clicking through it in a hurry

>> Placing the Yes and No buttons in unusual positions (here, Yes on the left, which is not typical) prevents a reflexive click; red and green are also used to signify the gravity of the situation, red being the usual code for danger
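One way to make such a prompt truly unavoidable is to require the operator to re-type the name of the emergency instead of just clicking Yes. The sketch below is hypothetical; the function name and messages are invented for illustration.

```python
# Sketch: a confirmation step that forces the operator to re-type the alert name.
# A generic Yes/No click-through is replaced by an explicit, hard-to-do-by-accident check.

def confirm_real_alert(selected_alert, typed_confirmation):
    """Only issue the alert if the operator re-types the exact alert name."""
    if typed_confirmation != selected_alert:
        return (False, "Confirmation text did not match - alert NOT issued")
    return (True, "Alert issued")

# A hurried click-through (empty confirmation) no longer works:
issued, msg = confirm_real_alert("ACTUAL - Ballistic missile alert", "")
print(issued)  # False

# Only a deliberate, exact re-typing of the alert name goes through:
issued, msg = confirm_real_alert("ACTUAL - Ballistic missile alert",
                                 "ACTUAL - Ballistic missile alert")
print(issued)  # True
```

This is the same pattern many destructive operations use (for example, typing a repository name before deleting it): the friction is the feature.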

  3. An additional level of authorisation must be added for scenarios where real emergency warnings are issued. For TEST actions, the user may proceed and begin the drill, but if they select an ACTUAL warning, the steps take it to another level of authorisation where another employee – a peer or a senior – reviews the action and performs the final warning issue.

>> This prevents erroneous actions, and also reduces the possibility of hackers or malicious actors issuing false warnings just by gaining access via one user

>> Define your hierarchy of users or approvals for each case of emergency
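A minimal sketch of such a two-person rule follows. The role names, alert labels and return messages are all invented for illustration; a real system would tie this into its actual user and audit infrastructure.

```python
# Sketch of a two-person authorization rule for real alerts (roles/labels invented).
# TEST actions proceed directly; ACTUAL warnings need a second, different approver.

def issue_alert(alert, initiator, approver=None):
    is_real = alert.startswith("ACTUAL")
    if not is_real:
        return f"Drill started by {initiator}"
    if approver is None:
        return "PENDING: real alert requires a second authorizer"
    if approver == initiator:
        return "REJECTED: approver must be a different person"
    return f"Alert issued by {initiator}, approved by {approver}"

print(issue_alert("TEST - Missile alert drill", "op1"))
print(issue_alert("ACTUAL - Ballistic missile alert", "op1"))
print(issue_alert("ACTUAL - Ballistic missile alert", "op1", approver="op1"))
print(issue_alert("ACTUAL - Ballistic missile alert", "op1", approver="supervisor"))
```

Note that the rule also rejects self-approval, so compromising a single account is not enough to issue a real warning.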

 

These ideas may sound basic, but they are all components of good software usability, appropriateness for purpose, and basic security in the use of the application.

We are simply working with human psychology, understandability and attention spans.

Let us endeavour to give a little more ‘thought’ to the system:

  • Think about its real-world usage,
  • Consider the implications of a wrong action in the system,
  • Add more practicality into the design,
  • Make space for human mistakes,
  • Help humans make better and informed decisions, and
  • Explore all possibilities to avoid such errors.

 

Cheers,

Nishi

 

Let the ‘Agile Manifesto’ guide your testing efforts!

Hello readers

My article on the relationship of the Agile Manifesto to the efforts and dilemmas of software testing has been published at stickyminds.com.

Here are excerpts from the article – Please visit https://www.stickyminds.com/article/let-agile-manifesto-guide-your-software-testing and share your views too!


 

The Agile Manifesto is the basis of the agile process framework for software development. It sums up the thought process of the agile mind-set over the traditional waterfall methodology, and it’s the first thing we learn about when we set out to embrace an agile transition.

The Agile Manifesto applies to all things agile: Different frameworks like Scrum, DAD (Disciplined Agile Delivery), SAFe (Scaled Agile Framework), and Crystal all stem from the same principles.

Although its values are commonly associated with agile development, they apply to all people and teams following the agile mind-set, including testers. Let’s examine the four main values of the Agile Manifesto and find out how they can bring agility to teams’ test efforts.


Individuals and Interactions over Processes and Tools

Agile as a development process values the team members and their interactions more than elaborate processes and tools.

This value also applies to testers. Agile testing bases itself in testers’ continuous interaction and communication with the rest of the team throughout the software lifecycle, instead of a one-way flow of information from the developers or business analysts on specific milestones on the project. Agile testers are involved in the requirements, design, and development of the project and have constant interaction with the entire team. They are co-owners of the user stories, and their input helps build quality into the product instead of checking for quality in the end. Tools are used on a necessary basis to help support the cause and the processes.

For example, like most test teams, a team I worked on had a test management system in place, and testers added their test cases to the central repository for each user story. But it was left up to the team when in the sprint they wanted testers to do that. While some teams added and wrote their test scenarios directly on the portal, other teams found it easier to write and consolidate test cases in a shared sheet, get them reviewed, and then add them all to the repository portal all at one go.

While we did have a process and a tool in place to have all test cases in a common repository for each sprint, we relied on the team to decide what the best way for them was to do that. All processes and tools are only used to help make life easier for the agile team, rather than to complicate or over-formalize the process.

Working Software over Comprehensive Documentation

With this value, the Agile Manifesto states the importance of having functioning software over exhaustively thorough documents for the project.

Similarly, agile testers embrace the importance of spending more time actually testing the system and finding new ways to exercise it, rather than documenting test cases in a detailed fashion.

Different test teams will use different techniques to achieve a balance between testing and documentation, such as using one-liner scenarios, exploratory testing sessions, risk-based testing, or error checklists instead of test cases to cover testing, while creating and working with “just enough” documentation in the project, be it through requirements, designs, or testing-related documents.

I worked on an agile project for a product where we followed Scrum and worked with user stories. Our approach was to create test scenarios (one-liners with just enough information for execution) based on the specified requirements in the user story. These scenarios were easily understood by all testers, and even by the developers to whom they were sent for review.

Execution of test scenarios was typically done by the same person who wrote them, because we had owners for each user story. Senior testers were free to buddy test or review the user story in order to provide their input for improvements before finalizing the tests and adding them into the common repository.

Customer Collaboration over Contract Negotiation

This is the core value that provides the business outlook for agile. Customer satisfaction supersedes all else. Agile values the customer’s needs and constant communication with them for complete transparency, rather than hiding behind contract clauses, to deliver what is best for them.

Agile testing takes the same value to heart, looking out for the customer’s needs and wishes at all points of delivery. What is delivered in a single user story or in a single sprint to an internal release passes under the scrutiny of a tester acting as the advocate for the customer.

Because there is no detailed document for each requirement, agile testers are bound to question everything based on their perception of what needs to be. They have no contract or document to hide behind if the user is not satisfied at the end of the delivery, so they constantly think with their “user glasses” on.

As an agile tester, when I saw a feature working fine, I would question whether it was placed where a user would find it. Even when the user story had no performance-related criteria, I would debate whether a page load time of six seconds was acceptable. After I saw that an application was functionally fine, I still explored and found that open background task threads were not getting closed, leading to the user’s machine hanging after a few hours of operation. None of these duties were part of any specification, but they were all valuable to the user and needed correction.

Responding to Change over Following a Plan

Agile welcomes change, even late in development. The whole purpose of agile is to be flexible and able to incorporate change. So, unlike the traditional software development approaches that are resistant to change, agile has to respond to change, and teams should expect to revise their plans.

In turn, such is the case for agile testing. Agile testing faces the burden of continuous regression overload; coupled with frequent changes to requirements, rework may double, leading to testing and retesting the same functionalities over and over again.

But agile testing teams are built to accommodate that, and they should have the ability to plan in advance for such situations. They can follow approaches like implementing thorough white-box testing, continuously automating tested features, having acceptance test suites in place, and relying on more API-level tests rather than UI tests, especially in the initial stages of development when the user interface may change a lot.

These techniques lighten the testing team’s burden so that they can save their creative energies to find better user scenarios, defects, and new ways to exercise the system under test.
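Of the techniques above, relying on API-level rather than UI tests can be sketched as follows. The endpoint behaviour and payload shape here are hypothetical stand-ins, not a real API; the point is that checks written against the API contract keep working while the UI is still being redesigned.

```python
# Sketch: API-level regression checks that survive UI changes
# (the endpoint and payload shape are hypothetical).

def create_user_api(payload):
    """Stand-in for a real HTTP endpoint handler, used here for illustration."""
    if not payload.get("email"):
        return {"status": 400, "error": "email required"}
    return {"status": 201, "id": 1, "email": payload["email"]}

def test_create_user_requires_email():
    resp = create_user_api({"name": "Ada"})
    assert resp["status"] == 400

def test_create_user_happy_path():
    resp = create_user_api({"name": "Ada", "email": "ada@example.com"})
    assert resp["status"] == 201 and resp["email"] == "ada@example.com"

test_create_user_requires_email()
test_create_user_happy_path()
print("API-level regression checks passed")
```

Because these tests never touch screens or selectors, early-sprint UI churn does not invalidate them; UI tests can be layered on later, once the interface stabilizes.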

 

Let the Agile Manifesto Guide Your Testing

When agile testers have dilemmas and practical problems, they can look to the Agile Manifesto for answers. Keep it in mind when designing and implementing test efforts; the Agile Manifesto’s values will guide you to the best choice for your team and your customers.


 

Hope you liked my write-up, please share your views too!

Happy Testing!

Nishi

 

Pesticide Paradox in Software Testing

Pests and bugs sound alike? They act alike too!

Boris Beizer, in his book Software Testing Techniques (1990), coined the term pesticide paradox to describe the phenomenon that the more you test software, the more immune it becomes to your tests.

Just as insects eventually build up resistance if you keep applying the same pesticide, software undergoing the same repetitive tests builds resistance to them, and those tests fail to catch further defects.

  • Software undergoing the same repetitive tests eventually builds up resistance to them.
  • As you run your tests multiple times, they stop being effective in catching bugs.
  • Moreover, some of the new defects introduced into the system will not be caught by your existing tests and will be released into the field.

Solution: Refresh and Revise Test Materials Regularly

In order to overcome the pesticide paradox, testers must regularly develop newer tests that exercise the various parts of the system and their interconnections to find additional defects.

Also, testers cannot rely forever on existing test techniques or methods, and must be on the lookout for ways to continually improve them to make testing more effective.

It is suggested to keep revisiting test cases regularly and revising them. Though agile schedules leave little spare time for such activities, the testing team is bound to keep planning these exercises in order to keep the best performance coming. A few ideas to achieve this:

  • Brainstorming sessions – to think of more ideas around testing the same component
  • Buddy reviews – new joiners to the team are encouraged to give their fresh perspective on the existing test scenarios for the product, which might get some new cases added
  • Strike out older tests for functionalities that have changed or been removed
  • Build new tests from scratch if a major change is made to a component, to open a fresh perspective
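Beyond the team practices above, one practical way to keep individual tests from going stale is to vary the test data on every run instead of hard-coding the same inputs. The sketch below is hypothetical: the function under test (`normalize_username`) and its properties are invented for illustration, and the seed is kept explicit so any failure is reproducible.

```python
import random

# Sketch: seeded input variation to fight the pesticide paradox.
# The function under test, normalize_username, is a hypothetical example.
# Each run can exercise fresh data, yet any chosen seed reproduces exactly.

def normalize_username(name):
    return name.strip().lower()

def run_varied_checks(seed):
    rng = random.Random(seed)
    alphabet = "abcXYZ _-"
    for _ in range(100):
        raw = "".join(rng.choice(alphabet) for _ in range(rng.randint(1, 12)))
        result = normalize_username(raw)
        # Check properties that must hold for *any* input,
        # not one memorized example the code has grown immune to:
        assert result == result.lower()
        assert not result.startswith(" ") and not result.endswith(" ")
    return True

print(run_varied_checks(seed=42))  # True
```

This is the idea behind property-based testing tools such as Hypothesis: instead of repeating one fixed input forever, the test states an invariant and throws varied data at it, so the suite keeps probing new corners of the system.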

 

UPDATE–

This article has been recommended and used as a reference by Hannes Lindblom in his blog at https://konsultbolag1.se/bloggen/veckans-testartips-15-tur-genom-variation