Training on Selenium – CP-SAT Certification Batches @Bangalore

CP-SAT stands for “Certified Practitioner – Selenium Automation Testing”, a certification prepared and honoured by the Agile Testing Alliance and Universiti Teknologi Malaysia (UTM), and the Selenium training course I have been conducting in Bangalore. We conducted a public batch over the last weekend as well as a corporate batch this month, where participants got to build, enhance and maintain scripts in the Eclipse IDE with Selenium 3.x WebDriver.

Training Approach: This course is designed to train agile professionals in the basics of testing web applications using Selenium, leading up to advanced topics. I approached the training as a combination of theory and hands-on execution of scripts using the features of Selenium, with ample time given to practice, and kept the focus on the practical application of Selenium to resolve common web test automation challenges.

Agenda: This course focuses on the latest Selenium 3.x and its advantages, WebDriver 3.x configuration and execution-related concepts using the JUnit and TestNG frameworks, Selenium reporting mechanisms, data-driven testing, getting started with Selenium Grid, and handling various types of web elements, iframes, dynamic lists, etc. To know more about the course syllabus, please click here
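For readers new to the stack, here is a minimal sketch of the kind of WebDriver test participants build during the course, assuming the selenium-java and TestNG jars are on the classpath; the URL and locators are purely illustrative:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class LoginPageTest {

    private WebDriver driver;

    @BeforeClass
    public void setUp() {
        // Selenium 3.x needs the browser driver binary on the PATH
        // (or the webdriver.chrome.driver system property set)
        driver = new ChromeDriver();
    }

    @Test
    public void pageTitleIsShown() {
        driver.get("https://example.com/login");   // illustrative URL
        Assert.assertTrue(driver.getTitle().contains("Login"));
    }

    @Test
    public void errorShownForEmptySubmit() {
        driver.get("https://example.com/login");
        driver.findElement(By.id("submit")).click();            // illustrative locator
        String error = driver.findElement(By.cssSelector(".error")).getText();
        Assert.assertFalse(error.isEmpty());
    }

    @AfterClass
    public void tearDown() {
        driver.quit();   // always release the browser session
    }
}
```

In the training, the same structure is then extended with explicit waits, page objects, and data-driven variants using TestNG’s DataProvider.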

Course Schedule: The course consists of 3 full days of training with hands-on assignments and practicals, followed by 5 days of 2-hour live web sessions with the trainer for further learning, queries and clarifications. Thereafter the candidates attempt a mock exam, which gives an idea of the real certification exam. The final exam consists of 2 sections: Theory, an online objective-type quiz, and Practical, a 2-hour exam involving implementation and submission of given case studies.

We have received a tremendous response to the CP-SAT training batches, and many more candidates are interested in the upcoming training sessions scheduled in Bangalore.

Here is a sneak peek into the training room and also some wonderful feedback shared by our candidates-

Public CP-SAT Batch @Bangalore

Corporate CP-SAT Batch @Bangalore

If interested please check the upcoming batches calendar at – http://ataevents.agiletestingalliance.org/

Happy Learning!
Nishi


Better Software Design Ideas for the Hawaii Emergency Alert System

Continuing the discussion on the Hawaii missile alert, which made headlines in January 2018 and turned out to be a false alarm, raising panic amongst almost a million people of the state all for nothing (read here for a detailed report), I would like to bring the focus back to the implications of poor software design leading to such human errors.

Better software design aims to make the software easier to use, fit for its purpose, and improve the overall experience of the user. While software design focuses on making all features easily accessible, understandable and usable, it can also be directed at making the user aware of all possibilities and implications before they act. Certain actions, if critical, can and should be made more distinct than the others, with added security or authorisations and visual hints indicating their critical nature.

Some of the best designers at freelancer.com came together to brainstorm ideas for better software design and to revamp the Hawaii government’s inept designs. They ran a contest amongst themselves to come up with the best designs that could avoid such a fiasco in future.

Sarah Danseglio, from East Meadow, New York, took home the $150 grand prize, while Renan M. of Brazil and Lyza V. of the Philippines scored $100 and $75 for coming in 2nd and 3rd, respectively.

Here is a sneak peek into how they designed the improved system: Read More »

Exploratory Testing using “Tours”

My latest article for stickyminds.com, “Using Tours to Structure Your Exploratory Testing” (https://www.stickyminds.com/article/using-tours-structure-your-exploratory-testing), talks about using tours to enhance the exploratory tests you perform and add more structure and direction to them.

Here is my experience report on using Tours in my testing project-

WHAT ARE TOURS —

In testing, a tour is an exploration of a product that is organized around a theme. Tours bring structure and direction to exploration sessions, so they can be used as a fundamental tool for exploratory testing. They’re excellent for surfacing a collection of ideas that you can then further explore in depth one at a time, and they help you become more familiar with a product—leading to better testing.

I had just started working with a new product, a web-based platform that was a fairly complex system with a large number of components, each with numerous features. Going into each component and inside every feature would take too much time; I needed a quick, broad overview and some feedback points I could share as queries or defects with my team.

I realized my exploration of the application would need some structure around it. Using test sessions and predefined charters, I could explore set areas and come back with relevant observations—I had discovered tours.

Cem Kaner describes a tour as an exploration of a product that is organized around a theme, which makes tours a fundamental tool for bringing structure and a definite direction to exploratory testing sessions.

Touring gives the tester a structured way to go about exploring the system, so they can focus on each part in turn and not overlook a component. That structure is combined with the theme of the tour, which provides a base for the kinds of questions to ask and the types of observations to be made.

In the course of conducting a tour, testers can find bugs, raise questions, uncover interesting aspects and features of the software, and create models, all done on the basis of the theme of the tour being performed.

Let’s discuss some common types of tours that are useful for testers and look at some examples.

Testing Tours

Read More »

Conducting a Webinar on “Strengthening your Agility using BDD” – with ATA

As part of the webinar series by the Agile Testing Alliance (ATA), I will be conducting a webinar on the topic “Strengthening your Agility with BDD – A Demo using Cucumber”. I will discuss the practical issues agile teams face and the use of Behavior Driven Development to overcome them. I shall also demo a basic BDD framework using Cucumber and showcase a practical test scenario.

The webinar will cover –

  • Practical issues faced by agile teams
  • QA issues in fast-paced agile
  • Behavior Driven Development – the definition and the need
  • Extending agile user stories and acceptance criteria into BDD scenarios
  • Demo using Cucumber
  • Usage and benefits of BDD in agile
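To give a flavour of the demo, here is a hedged sketch of what a Cucumber scenario and its Java step definitions might look like; the feature wording, step texts, and the LoginPage page object are all illustrative, and the cucumber-java dependency is assumed:

```java
// Feature file (src/test/resources/features/login.feature):
//
//   Feature: User login
//     Scenario: Registered user logs in successfully
//       Given a registered user "alice"
//       When she logs in with a valid password
//       Then she should see her dashboard

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import static org.junit.Assert.assertTrue;

public class LoginSteps {

    // Hypothetical page object that drives the browser via WebDriver
    private final LoginPage loginPage = new LoginPage();

    @Given("a registered user {string}")
    public void aRegisteredUser(String username) {
        loginPage.open(username);
    }

    @When("she logs in with a valid password")
    public void sheLogsInWithAValidPassword() {
        loginPage.submitValidPassword();
    }

    @Then("she should see her dashboard")
    public void sheShouldSeeHerDashboard() {
        assertTrue(loginPage.isDashboardVisible());
    }
}
```

The point of the demo is exactly this mapping: the plain-language scenario is readable by every stakeholder, while each step binds to executable automation code.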

Find more details about the webinar at https://www.townscript.com/e/webinar-on-bdd and register soon!

---- UPDATE ----

The recorded session is now available on the ATA YouTube channel:

 

Thanks

Nishi

 

Hawaii False Missile Alarm – was it entirely a Human Error?

Software impacts human lives – let us put more thought into it!

Here is what happened and my take on how software design may have been partly responsible and could be improved >>

Hawaii Shocked!

The US state of Hawaii suffered a massive panic on Saturday, the 13th of January 2018. More than a million people in Hawaii were led to fear that they were about to be struck by a nuclear missile, due to the circulation of a message sent out by the state emergency management agency. The message, sent statewide just after 8 a.m. Saturday, read: “BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL.”


The residents were left in a state of panic. People started scrambling to get to safe places, gathering supplies and even saying their goodbyes. Some took shelter in manholes, some gathered their kids into the most sheltered rooms in their homes like bathrooms or basements, some huddled in their closets and some sent out goodbye messages to their loved ones.

It turned out to be a false alert. Around 40 minutes later, the agency sent out another message saying the alert was a false alarm sent by mistake!

The questions being asked were: how could this happen, and why did it take 40 minutes to check and issue an all-clear?

 

Why Did This Happen?

Investigations into the incident revealed the cause, and the governor stated: “It was a procedure that occurs at the change of shift which they go through to make sure that the system is working, and an employee pushed the wrong button.”

The error occurred when, in the midst of a drill during a shift change at the agency, an employee made the wrong selection from a “drop-down” computer menu, choosing to activate a missile launch warning instead of the option for generating an internal test alert. The employee, believing the correct selection had been made, then went ahead and clicked “yes” when the system’s computer prompt asked whether to proceed.

Analysing the Root Cause

But is the fault only at the human level? Software used for such critical purposes also needs to be designed to prevent the possibility of such human errors.

After all, triggering such a massive state-wide emergency warning should not have been as simple as the push of a wrong button by a single person!

Could a better design of the software have prevented this kind of scenario from happening?

As reported, the incorrect selection was made in a dropdown, which, let’s imagine, would look something like this:

Hawaii State Emergency Sample System

After the selection was made, the system sent a prompt and the employee, believing the correct selection had been made, then went ahead and clicked “yes”.

From this information, we can assume that the prompt was something generic, like:

Hawaii State Emergency Sample Prompt

 

Though it definitely was a human error, isn’t the system also at fault for letting this happen so easily?

Better Design Ideas – More Thought – Improving Your Software

By putting some extra thought into the design of the software, we can make it more robust against such incidents.

Here are some things that could have helped design it better –

  1. Do not have the TEST options placed right next to the ACTUAL emergency options!

Use different fields, or perhaps different sub-menus inside the dropdown as categories.

Segregating the actions in the dropdown into categories

 

>> Always have the TEST category of warnings higher up in the list

>> Have the default selection in the dropdown either BLANK or set to one of the TEST warnings, never one of the actual ones

>> Keeping the actual warnings lower down and separated from the similarly worded TEST warnings reduces the chance of wrongly selecting a similarly named option from the dropdown
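As a rough sketch of this idea, the alert options could be modelled in code so that TEST and ACTUAL alerts live in separate categories, TEST options are displayed first, and there is no dangerous default selection (all option names here are hypothetical):

```java
import java.util.List;

public class AlertMenu {

    enum Category { TEST, ACTUAL }

    record AlertOption(String label, Category category) { }

    private final List<AlertOption> options = List.of(
        new AlertOption("Test missile alert (DRILL)", Category.TEST),
        new AlertOption("Test tsunami alert (DRILL)", Category.TEST),
        new AlertOption("Missile alert", Category.ACTUAL),
        new AlertOption("Tsunami alert", Category.ACTUAL)
    );

    // TEST options are listed first; ACTUAL options are grouped separately below
    List<AlertOption> optionsInDisplayOrder() {
        return options.stream()
                .sorted((a, b) -> a.category().compareTo(b.category()))
                .toList();
    }

    // No default selection: the operator must make an explicit choice
    AlertOption defaultSelection() {
        return null;
    }
}
```
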

 

  2. Make the prompt message unique to each scenario; when a real warning is being issued, the prompt must ask the user to specify the emergency.
Unavoidable prompt with an explicit message

 

>> Make the prompt appear critical through the use of colour and text

>> A critical prompt must catch the user’s attention and not look similar to the system’s other screens and popups, to avoid the possibility of clicking through it in a hurry

>> Placing the Yes and No buttons on unusual sides (Yes on the left, which is not typical) prevents a habitual click; red and green are also used to signify the gravity of the situation, red being the usual code for danger
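One way to sketch such an unavoidable prompt in code: before a real alert is sent, require the operator to type the exact alert name back, so a generic one-click “yes” can never confirm it (the alert names here are hypothetical):

```java
public class CriticalConfirmation {

    /**
     * A real alert is confirmed only when the operator types the exact
     * alert name back, so a hurried click can never send it.
     */
    static boolean confirmRealAlert(String alertName, String typedConfirmation) {
        return alertName.equals(typedConfirmation.trim());
    }

    public static void main(String[] args) {
        // A generic "yes" no longer works:
        System.out.println(confirmRealAlert("BALLISTIC MISSILE ALERT", "yes"));
        // Only the explicit alert name confirms the action:
        System.out.println(confirmRealAlert("BALLISTIC MISSILE ALERT",
                                            "BALLISTIC MISSILE ALERT"));
    }
}
```
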

  3. An additional level of authorisation must be added when real emergency warnings are being issued. For TEST actions the user may proceed and begin the drill, but if they select an ACTUAL warning, the workflow moves to another level of authorisation, where a second employee, a peer or a senior, reviews the action and performs the final warning issue.

>> This prevents erroneous actions, and also reduces the possibility of hackers or malicious people issuing false warnings by gaining access through a single user.

>>Define your hierarchy of users or approvals for each case of emergency.
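A minimal sketch of this two-person rule, with a hypothetical workflow object: a real alert needs approval from a second, different person before it can be issued, while a drill proceeds immediately:

```java
public class AlertAuthorization {

    private String requestedBy;
    private boolean isRealAlert;
    private boolean approved;

    void request(String operator, boolean realAlert) {
        this.requestedBy = operator;
        this.isRealAlert = realAlert;
        // TEST drills need no second approval
        this.approved = !realAlert;
    }

    // Approval must come from a different person than the requester
    void approve(String approver) {
        if (isRealAlert && !approver.equals(requestedBy)) {
            this.approved = true;
        }
    }

    boolean mayIssue() {
        return approved;
    }
}
```
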

 

These ideas may sound basic, but they are all components of good software usability, appropriateness of purpose, and basic security in the use of the application.

We are simply working with human psychology, ease of understanding, and attention spans.

Let us endeavour to give a little more ‘thought’ to the system:

  • Think about its real-world usage,
  • Consider the implications of a wrong action in the system,
  • Add more practicality into the design,
  • Make space for human mistakes,
  • Help humans make better and informed decisions, and
  • Explore all possibilities to avoid such errors.

 

Cheers,

Nishi

 

Getting Featured in the ‘Top 10 Articles of 2017’ at Stickyminds!

Dear Readers

It is my pleasure and honor to share that my article on

                              “Let the Agile Manifesto guide your Software Testing”

published on the TechWell community forum http://www.stickyminds.com in 2017 has now been featured in the “Hottest Articles of 2017” list, among the top 10 most-read articles of last year!

You can give it a read at https://www.stickyminds.com/article/let-agile-manifesto-guide-your-software-testing

I am happy that my thoughts, ideas and write-ups are getting noticed, and this motivates me to keep writing more, and better, always!

Wishing you a great year ahead.

Happy Testing!

 

Guest Post – “Agile Testing Reflections 2017”

My first guest post article for Gurock Software GmbH is now up!!

“Agile Testing Reflections 2017” is a look back on 2017: what we learned this year, and how we can take those learnings forward into the new year.

Please visit https://blog.gurock.com/agile-testing-reflections-2017/  and give it a read.

The major points I have touched upon in this article are –

         Users are and Always will be King

         DevOps is our Friend

         Skills are More Important than Tools

         ‘Contributor’ vs ‘Tester’

Read the article here –>

Thanks

Nishi


Must-have skills for Agile Testers – a key to the entire team’s agility

Software test professionals have long shouldered the responsibility of ensuring quality software delivery. With the advent of agile, testers in agile teams have a lot more on their plate. Instead of being a quality ‘gateway’ or checkpoint at the end of the software development life cycle, they have become an integral part of each and every phase of creation.

As I like to say, testers are key to the team’s agility and have a crucial part to play in the success of the project as well as the team’s agile journey.

In my article published on the ATA (Agile Testing Alliance) blog at https://atablogs.agiletestingalliance.org/agile/must-have-skills-for-agile-testers-a-key-to-the-entire-teams-agility/, I discussed the skills and thoughts that every agile tester must focus on to stay abreast of their project’s needs as well as the industry’s pace. Here are excerpts from the article:

—-

Be the User’s Amigo

Agile talks about the ‘3 amigos’: the business analyst, the developer and the tester, all working together to bring a user story to life. To understand the user’s perspective on each and every requirement, a tester plays a crucial role by thinking ahead, asking the right questions during story grooming and design discussions, and testing the most probable and useful paths during execution.

QA professionals are also required to think from the user’s chair and provide constant feedback through continuous involvement in usability testing, cognitive walkthroughs, focus groups, surveys, and the like.

Overall, an agile tester needs to think of the customers as their amigos, have a taste of their experiences, keep note of their problems and bring out the best in the software to keep their buddy happy!

 Learning Areas: Requirement Analysis, Use Cases and User Stories, Usability study, End-to-end testing, exploratory testing and test techniques

 

Keep a check on the ‘Behavior’

The requirements from the market need quick gathering, distillation and development, which requires collaboration across the entire set of stakeholders within short time scales.

Behavior Driven Development (BDD) uses the concept of ubiquitous language or a semi-formal language which is shared by all team members including developers, testers, business analysts and other non-technical stakeholders. It makes use of simple domain specific language to convert natural requirements into executable tests.

BDD is definitely catching up in the market because of its outside-in, multiple stakeholder and high automation approach suitable to agile projects. So, all my tester buddies, get your hands into BDD and related tools!

Learning Areas: Basics of TDD, BDD, Cucumber, ATDD, FitNesse, building a test automation framework

Spin the wheels of ‘Continuous Testing’

Software delivery timelines have shrunk tremendously, and to meet the expectations of quick delivery and deployment, software testing has to become part of the delivery pipeline. The advent of DevOps has brought with it various tools to support continuous testing (CT) as part of continuous delivery (CD), which makes testers an integral part of the DevOps system. Software testers are required to participate in setting up and maintaining sustainable test setups and environments, automating tests rapidly, and setting up and maintaining the continuous testing pipeline.

Learning Areas: DevOps fundamentals, configuration management tools, build automation, test automation tools and integration into the DevOps pipeline

 

It is all black and white!

Testers focusing solely on ‘black’ box testing and business-level tests now need to think more ‘white’. The need for continuous testing can be met only by adding API-level tests and moving towards microservices testing, which enables easy testing and deployment of new, independent pieces of functionality in the code.

With faster delivery cycles, huge interdependencies between systems in real time, and regression overloads, functional tests alone are insufficient and impossible to run in isolation. For comprehensive tests, API testing will be the foremost requirement, to verify dependencies on and with other applications and systems. Adding white box testing skills will not only be a value-add to the resume, but will also complete an essential piece of the ‘black and white’ picture.

Learning Areas: White Box testing basics, REST APIs, Unit test framework like JUnit, TestNG etc.
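For testers starting with API-level tests, here is a small sketch using Java’s built-in java.net.http client; the endpoint URL is purely illustrative, and in a real suite the response assertions would typically live in JUnit or TestNG tests:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ApiTestSketch {

    // Build a GET request against a (hypothetical) service endpoint
    static HttpRequest buildUserRequest(String userId) {
        return HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/users/" + userId))
                .header("Accept", "application/json")
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest request = buildUserRequest("42");
        System.out.println(request.method() + " " + request.uri());

        // Sending it and asserting on the response would look like:
        //   HttpClient client = HttpClient.newHttpClient();
        //   HttpResponse<String> response =
        //       client.send(request, HttpResponse.BodyHandlers.ofString());
        //   assert response.statusCode() == 200;
    }
}
```
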

 

A tester’s role in an agile team is to provide assistance and support in all areas, so they may need to switch hats frequently, depending on the team’s needs. The success of this endeavour, though, also depends on an open, communicative, and receptive environment fostered within the team. Adding these new skills to your hat can help any agile tester bring more value to their team.

Happy Testing!!

 

Guest Talk @ ETMarlabs meetup for EUROSTAR 2017 #magicoftestinginindia

I was invited to present a guest talk at the meetup organised by the ET Marlabs team for EUROSTAR 2017 on 9th Sep 2017, the first of its kind in India, and I gladly obliged! I presented a talk on the Agile Manifesto and its lessons for keen testers, answering our dilemmas in agile testing. The talk was very well received and brought out some great discussions with the participants. I was accompanied by another guest speaker, Mr. Vinay Krishna, who spoke about a Behavior Driven Development (BDD) framework using Cucumber, which was a very informative session too.

The team at ET Marlabs had also organised some great activities, a testing relay game and a quiz for the participants, which brought out their testing minds and enthusiasm and was well rewarded too! I would like to thank them for their kind invitation and encourage them to organise and participate in more such community events!

Have a glimpse here –

Paying Off the Technical Debt in Your Agile Projects

Just as you should not take out a financial loan without having a plan to pay it back, you should also have a plan when incurring technical debt. The most important thing is to have transparency—adequate tracking and visibility of the debt. Armed with the knowledge of these pending tasks, the team can devise a strategy for when and how to “pay off” technical debt.

Learn about managing your technical debt and testing debt in agile teams and share your thoughts on my latest article published at www.stickyminds.com and also at www.agileconnection.com

***** Here are some excerpts from the article for my readers***

Technical debt initially referred to code refactoring, but in today’s fast-paced software delivery, it has a growing and changing definition. Anything that the software development team puts off for later—be it smelly code, missing unit tests, or incomplete automated tests—can be technical debt. And just like financial debt, it is a pain to pay off.

Forming a Plan to Pay Off Technical Debt

Let’s say a development team working on a new project started out following a certain programming standard. They even set up an automated tool to run on the code periodically and give reports on the adherence to these standards. But the developers got busy and stopped running this tool after a sprint or two, and when the development manager asked for a report after a couple of months, there were hundreds of errors and warnings, all of which now need to be corrected.

This scenario happens all the time with agile teams focused on providing as much customer value as possible each sprint. The problem then needs to be fixed immediately, because despite having all the functionalities in place, the team doesn’t want to release code that is not up to production standards.

The team is then faced with a few options for how to service the debt:

  • Negotiate with the product owner on the number of user stories planned for the upcoming sprint in order to have some extra time for refactoring the code
  • Dedicate an entire sprint to code refactoring
  • Divide all errors and warnings among the development team and let them handle the task of corrections within the next sprint, along with their regular development tasks, by scheduling extra hours
  • Plan to spread this activity over a number of sprints and have a deadline for this report before the end of the release
  • Estimate the size of refactoring stories and either plan them into upcoming sprints as new user stories or accommodate them as part of existing user stories

Though these are all viable options, the best approach depends on the team, the context, upcoming deadlines, the risk the team is willing to take, the highest priority for functionalities that need to be shipped, and the collaboration with the product owner.

Again, just like when you take out a financial loan, you should plan to pay off technical debt as quickly as possible using the resources you have. It’s a good idea to perform a risk analysis of the situation and reach a consensus with the team about the best approach to take.

Technical Debt in Testing

Technical debt doesn’t occur only in programming. Testing activities are also likely to incur technical debt over time due to a variety of factors, including incomplete testing of user stories, letting regression tests pile up for later sprints, not automating essential tests every sprint, not having complete test cases written or uploaded to test management tools, not cleaning up test environments before the next iterations, and not developing or testing all test data combinations on the current features.

Sometimes debt may be incurred intentionally for a short term, such as not updating tests with new test data when testing on the last day of the sprint due to a time crunch, but planning to do it within the first couple of days in the next sprint. As long as the team has an agreement, it’s acceptable to defer some technical debt for a short while.

On occasion, debt may be incurred intentionally for a longer term by planning it in advance, such as deciding to postpone any nonfunctional tests, like performance or security-related tests, on the system until a few sprints are out and features are stable enough to carry out the tests. Again, as long as the team agrees with the risk and has a plan to address it, it is fine to defer certain activities.

Testing technical debt can get us out of tight situations when needed, but you still need to ensure that you plan carefully, remain aware of the debt, communicate it openly and frequently, and pay it off as soon as possible. Having a plan to service these debts reduces your burden over time and assures your software maintains its quality.


Prevention Is Better Than Cure

Avoiding having any technical debt is always preferable. As the saying goes, an ounce of prevention is worth a pound of cure.

Every team has to devise its own strategy to prevent technical debt from accumulating, but a universal best practice is to have a definition of “done” in place for all activities, user stories, and tasks, including for completing necessary testing activities. A definition of “done” creates a shared understanding of what it means to be finished so that everybody involved on the project means the same thing when they say it’s done. It becomes an expression of the team’s quality standards, and the team will become more productive as their definition of “done” gets more stringent.

Here’s a good example of criteria for a team’s definition of “done” for every user story they work on:

  • All acceptance criteria for the user story must be met
  • Unit tests must be written for the new code and maintain a 70 percent coverage
  • Functional tests must be performed, and exploratory tests must be performed by a peer tester other than the story owner
  • No critical or high severity issues remain open
  • All test cases for each user story must be documented and uploaded in the test management portal
  • Each major business scenario associated with the user story must be automated, added to the regression test suite, and maintain a 70 percent functional test coverage

Verifying that the activities completed meet these criteria will ensure that you are delivering features that are truly done, not only in terms of functionality, but in terms of quality as well. Adhering to this definition of “done” will ensure that you do not miss out on essential activities that define the quality of the deliverable, which will help mitigate the accumulation of debt.
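Such a checklist can even be made executable as a gate in the pipeline. Here is a hedged sketch, with hypothetical field names, of checking a story against the definition of “done” above:

```java
import java.util.ArrayList;
import java.util.List;

public class DefinitionOfDone {

    // One flag or metric per criterion from the checklist above (names are hypothetical)
    record StoryStatus(boolean acceptanceCriteriaMet,
                       double unitTestCoverage,
                       boolean functionalAndExploratoryTested,
                       int openCriticalIssues,
                       boolean testCasesUploaded,
                       double functionalTestCoverage) { }

    // Returns the list of unmet criteria; an empty list means the story is truly done
    static List<String> unmetCriteria(StoryStatus s) {
        List<String> unmet = new ArrayList<>();
        if (!s.acceptanceCriteriaMet())         unmet.add("acceptance criteria met");
        if (s.unitTestCoverage() < 70.0)        unmet.add("unit test coverage >= 70%");
        if (!s.functionalAndExploratoryTested()) unmet.add("functional and exploratory tests done");
        if (s.openCriticalIssues() > 0)         unmet.add("no open critical/high issues");
        if (!s.testCasesUploaded())             unmet.add("test cases uploaded");
        if (s.functionalTestCoverage() < 70.0)  unmet.add("functional test coverage >= 70%");
        return unmet;
    }
}
```

A story ships only when `unmetCriteria` comes back empty; anything left in the list is visible, named technical debt rather than a silent omission.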

Despite best practices and intentions, technical debt often will be inevitable. As long as the team is aware of it, communicates openly about it, and has a plan in place to pay it off as quickly as possible, you can avoid getting in over your head.

*************