Retrospectives are an integral part of every project we undertake, as well as a key ceremony in the Scrum lifecycle. Agile principles stress the need for periodic meetings where the team reflects on its functioning, processes and actions and tries to improve on its shortcomings, which makes retrospectives essential. The team gets to look back on their work and answer three key questions: What went well? What did not go well? How can we improve?
Even when agile teams perform retrospectives as a regular part of their project lifecycle, there are a few common mistakes they may be making due to a lack of understanding, perspective or communication, and these mistakes can keep them from getting the maximum benefit out of the retrospective.
In my article for the Gurock TestRail blog, I have discussed five common mistakes that we must avoid in Agile retrospectives.
As a project manager, you often need to take on new challenges and create guidelines for projects in a field you are not always familiar with.
You might have some experience working with a team of software developers, which gives you insight into the relevant testing disciplines. Or you may have directly come in as a project manager and need to begin understanding the process from scratch. Whatever the case may be, we are sure you already have enough on your plate. That is why I have gathered a few basic guidelines – both technical and methodological – to help you succeed in your new assignment as a test project leader!
My guest post for PractiTest is now up on the QA Learning Centre-
Dedicated to all PMs – here I discuss Software Testing 101, making this a guide for PMs to all things crucial in test process management. Read more...
>>>Agile testing leaves very little time for documentation. It relies on quick and innovative test case design rather than elaborate test case documents with detailed steps or results. This mirrors the values of Exploratory Testing. When executed right, it needs only lightweight planning, with a focus on fluidity rather than comprehensive documentation or test cases.
From a QA viewpoint, we can learn from the Agile Manifesto's key goals: communication, efficiency, collaboration and flexibility. If you improve your QA team in these areas, it will have a positive effect on your QA strategy and company growth.
>>>The Manifesto for Agile Software Development forms the golden rules for all Agile teams today. It gives us four basic values, which offer Agilists a clearer mindset and set them up for success in their Agile testing.
Although these values are mostly associated with Agile development, they equally apply to all phases, roles and people within the Agile framework, including Agile testing. As we know, Agile testers’ lives are different, challenging and quite busy. They have a lot to achieve and contribute within the short Agile sprints or iterations, and are frequently faced with dilemmas about what to do and how to prioritise, add value and contribute more to the team.
As testers, we have all worked with Excel at some point in our career. If you are using Excel now, this article is for you 🙂 Excel is used as a test management, documentation and reporting tool by many test teams. In the early stages, most teams rely on Excel spreadsheets for planning and documenting tests, as well as reporting test results. As teams grow, sharing information through Excel sheets becomes problematic. What used to be easy and intuitive becomes very challenging. Encountering difficult work scenarios like the ones below becomes a day-to-day reality:
The simple task of figuring out which Excel file has the test cases you need to run takes longer and longer.
Gathering the status of the testing tasks and your project can only be done by going desk to desk and asking each tester one by one.
A tester mistakenly spent six hours running the wrong tests in the wrong environment because of an Excel sheet that was not the updated copy.
Testers routinely lose their work or test results through saving over, overwriting or misplacing their Excel sheets.
Most test activities are not being documented or accounted for because writing tests is considered a luxury.
If one or more of these scenarios sounds familiar to you, your testing efforts are being held back by Excel!
In my latest guest post for PractiTest, I have written about how Excel can be a roadblock instead of a useful tool for your testing. To read the complete article, click here →
In it, I talk about issues related to the use of Excel in relation to
Cross-environment testing is viewed as a tedious and repetitive task and is generally a challenge to accommodate within an agile life cycle. In my recent guest post for Gurock, I showcased my own experience in an agile release in which we created a strategy to cover the number of test environments we had to support.
Using simple steps, discussions, base-lining and agreement within the scrum team, we created a scalable interoperability test strategy, which was later supplemented with automation and other tools. In this article I have talked about –
Want to Outsource your testing? Here are my “5 tips to manage your outsourced testing”
I have begun collaborating with PractiTest, and with the help of Rachel, my article has now been published at the PractiTest Learning Center.
In this article I have discussed the practical risks for teams that outsource their testing efforts. I have brought forward five key tips and tricks to manage outsourced software testing, along with team and people issues, as follows:
“5 Ways DevOps complements Agile” – As an industry practitioner who has worked in agile for almost a decade now, I have always seen DevOps as a friend and an extension of agile. In this article I have tried to put across my view on how this handshake between developers and operations personnel helps bridge the gap from software creation to software delivery.
Continuing the discussion on the Hawaii missile alert, which made headlines in January 2018, turned out to be a false alarm and ended up raising panic among almost a million people of the state all for nothing (read here for the detailed report), I would like to bring the focus back to the implications of poor software design leading to such human errors.
Better software design aims at making the software easier to use, fit for its purpose and better for the overall experience of the user. While software design focuses on making all features easily accessible, understandable and usable, it can also be directed at making the user aware of all possibilities and implications before they perform their actions. Certain actions, if critical, can and should be made more distinct than the others, and may have added security or authorisations and visual hints indicating their critical nature.
Some of the best designers at freelancer.com came together to brainstorm ideas for better software design and to revamp the Hawaii government’s inept designs. They ran a contest amongst themselves to come up with the best designs that could avoid such a fiasco in future.
Sarah Danseglio, from East Meadow, New York, took home the $150 grand prize, while Renan M. of Brazil and Lyza V. of the Philippines scored $100 and $75 for coming in 2nd and 3rd, respectively.
Here is a sneak peek into how they designed the improved system. Read more »
Here is my experience report on using tours in my testing project.
WHAT ARE TOURS —
In testing, a tour is an exploration of a product that is organized around a theme. Tours bring structure and direction to exploration sessions, so they can be used as a fundamental tool for exploratory testing. They’re excellent for surfacing a collection of ideas that you can then further explore in depth one at a time, and they help you become more familiar with a product—leading to better testing.
I had just started working with a new product, a web-based platform that was a fairly complex system with a large number of components, each with numerous features. Going into each component and inside every feature would take too much time; I needed a quick, broad overview and some feedback points I could share as queries or defects with my team.
I realized my exploration of the application would need some structure around it. Using test sessions and predefined charters, I could explore set areas and come back with relevant observations—I had discovered tours.
Cem Kaner describes tours as an exploration of a product that is organized around a theme. Tours help bring structure and a definite direction to exploration sessions, so they can be used as a fundamental tool for exploratory testing.
Tours are excellent for surfacing a collection of ideas that you can then further explore in depth one at a time. Tour-based testing gives the tester a structure for the way they go about exploring the system, so they can focus on each part in turn and not overlook a component. The structure is combined with the theme of the tour, which provides a basis for the kinds of questions to ask and the types of observations that need to be made.
In the course of conducting a tour, testers can find bugs, raise questions, uncover interesting aspects and features of the software, and create models, all done on the basis of the theme of the tour being performed.
Let’s discuss some common types of tours that are useful for testers and look at some examples.
Software impacts human lives – let us put more thought into it!
Here is what happened and my take on how software design may have been partly responsible and could be improved >>
The US state of Hawaii went into a massive panic on Saturday, the 13th of January 2018. More than a million people in Hawaii were led to fear that they were about to be struck by a nuclear missile due to the circulation of a message sent out by the state emergency management agency. The message, sent state-wide just after 8 a.m. on Saturday, read: “BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL.”
The residents were left in a state of panic. People started scrambling to get to safe places, gathering supplies and even saying their goodbyes. Some took shelter in manholes, some gathered their kids into the most sheltered rooms in their homes like bathrooms or basements, some huddled in their closets and some sent out goodbye messages to their loved ones.
It turned out to be a false alert. Around 40 minutes later, the agency sent out another message saying that the earlier alert was a false alarm sent out by mistake!
The questions being asked were: how could this happen, and why did it take 40 minutes to check and issue an all-clear?
Why Did This Happen?
The findings of the investigation into the incident were revealed, and the governor stated that “It was a procedure that occurs at the change of shift which they go through to make sure that the system is working, and an employee pushed the wrong button.”
The error occurred when, in the midst of a drill during a shift change at the agency, an employee made the wrong selection from a “drop-down” computer menu, choosing to activate a missile launch warning instead of the option for generating an internal test alert. The employee, believing the correct selection had been made, then went ahead and clicked “yes” when the system’s computer prompt asked whether to proceed.
Analysing the Root Cause
But is the fault only at the human level? Software being used for such critical purposes also needs to help avoid the possibility of such human errors.
After all, triggering such a massive state-wide emergency warning should not have been as simple as the push of a wrong button by a single person!
Could a better design of the software have prevented this kind of scenario from happening?
As reported, the incorrect selection was made in a dropdown – which, let's imagine, would have looked something like this:
After the selection was made, the system sent a prompt and the employee, believing the correct selection had been made, then went ahead and clicked “yes”.
From this information, we can assume that the prompt would have been something generic like:
Though it definitely was a human error, isn't the system also at fault for letting this happen so easily?
Better Design Ideas – More Thought – Improving Your Software
By putting some extra thought into the design of the software, we can make it more robust and avoid such incidents.
Here are some things that could have helped design it better:
Do not have the TEST options placed right next to the ACTUAL emergency options!
Have different fields or perhaps different sub menus inside the dropdown as categories.
>> Always have the TEST category of warnings higher up in the list
>> Have the default selection in the dropdown either as BLANK or as one of the TEST warnings, and not one of the actual ones
>> Having the actual warnings section lower down and separated from the similarly worded TEST warnings would lower the chance of wrongly selecting a similarly named option from the dropdown
The prompt message must be made unique to each scenario, and when a real warning is selected for issue, the prompt must ask the user to specify the emergency.
>> Make the prompt appear critical through the use of color and text
>> A critical prompt must catch the user's attention and must not look like the other screens and pop-ups of the system, to avoid the possibility of it being clicked through in a hurry.
>> Placing the Yes and No buttons on unusual sides (here Yes is on the left, which is not typical) prevents a habitual click of the button – red and green are also used to signify the importance of the situation, red being the usual code for danger.
An additional level of authorisation must be added for scenarios where real emergency warnings are issued. So, for the TEST actions, the user may proceed and begin the drill, but if they select an ACTUAL warning, the flow takes it to another level of authorisation where another employee – a peer or a senior – reviews the action and performs the final issue of the warning (a minimal sketch of this flow follows this list).
>> This prevents erroneous actions and also reduces the possibility of hackers or malicious people issuing false warnings just by gaining access via one user.
>> Define your hierarchy of users or approvals for each type of emergency.
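To make these suggestions concrete, here is a minimal, hypothetical sketch of how they could fit together. It is not the real Hawaii emergency management software – the alert names, the issue_alert function and the approver parameter are all assumptions made purely for illustration:

```python
# Hypothetical sketch only – not the actual alerting system.
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class Alert:
    name: str
    is_test: bool

# TEST options live in their own category, separated from ACTUAL ones,
# and there is no pre-selected default.
TEST_ALERTS: List[Alert] = [Alert("TEST - Ballistic missile drill", is_test=True)]
ACTUAL_ALERTS: List[Alert] = [Alert("ACTUAL - Ballistic missile threat inbound", is_test=False)]

def confirmation_matches(alert: Alert, typed_text: str) -> bool:
    """Scenario-specific prompt: the operator must retype the exact alert name
    instead of clicking a generic 'Are you sure? Yes/No'."""
    return typed_text.strip() == alert.name

def issue_alert(alert: Alert, typed_text: str, approver: Optional[str] = None) -> str:
    if not confirmation_matches(alert, typed_text):
        return "Rejected: the confirmation text does not match the selected alert."
    if alert.is_test:
        # Drills can proceed on a single operator's confirmation.
        return f"Test drill started: {alert.name}"
    if approver is None:
        # ACTUAL alerts wait for a second authorised person to review and approve.
        return "Pending: an ACTUAL alert requires approval from a second authorised user."
    return f"ACTUAL alert issued: {alert.name} (approved by {approver})"

# A mistaken or hurried selection now stops at the pending stage
# instead of going state-wide.
print(issue_alert(ACTUAL_ALERTS[0], "ACTUAL - Ballistic missile threat inbound"))
print(issue_alert(ACTUAL_ALERTS[0], "ACTUAL - Ballistic missile threat inbound",
                  approver="duty.supervisor"))
print(issue_alert(TEST_ALERTS[0], "TEST - Ballistic missile drill"))
```

With a flow like this, the “wrong button” on its own can neither send a real alert nor be confirmed with a reflexive click on a generic Yes.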
These ideas may sound basic, but they are all components of good usability of the software, its appropriateness for its purpose and basic security in the use of the application.
We are simply working with human psychology, ease of understanding and attention spans.
Let us endeavour to give a little more ‘thought’ to the system.