Is Test Automation Alienating Your Business Testers?

With numerous test automation tools and frameworks available today, many in the software testing industry are focused on learning them all. It is important to stay current with new technology, but are testers losing something in the race to become more technical and better equipped with automation skills?

In my article published on the TestRail blog, I examine how to tell whether your test automation is becoming so technical and code-intensive that it risks alienating the subject-matter-expert testers who know the core of your business best.

Technology should serve people

It is important to understand and remember that test automation tools have been designed to make testers’ lives easier and better. They are not intended to replace testers or overpower them. They make tests execute faster, with more accuracy and fewer errors, so if they eliminate anything, it is redundancy and repetitive work. This technology is meant to serve testers — to save their time and effort and give them more freedom.

To this end, the first criterion for adopting any technology must be its fitness for use in the project, not its popularity in the market. The skills needed to adopt the tool and begin using it in the project should be easy to acquire through hands-on learning or training. Read full article ->

Testing is creative

Testing is a creative job, and it always has been. The advent of new tools and technology has not changed this fact. Tools can do part of a tester’s job, but they still cannot test. Although some may argue that artificial intelligence and machine learning can take over many of these creative aspects, we are not there yet. We still want and need a human to devise creative tests, discuss the pros and cons of design aspects, peer-review test cases, and report problems.

Everyone can contribute to test automation

When we look at testers’ resumes, the tendency is to look for the tools they can work with. But the more important skill we need is their ability to contribute to test automation in one way or another. We cannot judge this just by asking whether a person can write test automation scripts or knows a certain programming language. They may be able to learn the Gherkin format to design and write feature files for Cucumber tests. Or, if you decide to adopt a keyword-driven framework, they could pick up the keywords and begin writing tests so that the same test cases double as test scripts.
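
To make the keyword-driven idea concrete, here is a minimal sketch in Python. The keywords, the login flow, and the function names are hypothetical illustrations rather than any specific framework: engineers map each keyword to an implementation once, and business testers then author test cases as plain keyword rows.

    # Minimal keyword-driven sketch: business testers author test cases as
    # rows of (keyword, arguments); engineers map each keyword to a function.
    # The keywords and the login flow below are hypothetical.

    def open_login_page():
        # In a real suite this step would drive a browser or an API client.
        print("Opening login page")

    def enter_credentials(username, password):
        print(f"Entering credentials for {username}")

    def verify_welcome_message(expected):
        print(f"Verifying welcome message equals '{expected}'")

    # Engineers maintain this mapping once.
    KEYWORDS = {
        "Open Login Page": open_login_page,
        "Enter Credentials": enter_credentials,
        "Verify Welcome Message": verify_welcome_message,
    }

    # Business testers write the test as keyword rows; the same rows double
    # as a readable test case and an executable script.
    LOGIN_TEST = [
        ("Open Login Page",),
        ("Enter Credentials", "nishi", "secret"),
        ("Verify Welcome Message", "Welcome, Nishi!"),
    ]

    def run(test_case):
        for keyword, *args in test_case:
            KEYWORDS[keyword](*args)

    if __name__ == "__main__":
        run(LOGIN_TEST)

The same shape applies to Gherkin: the subject-matter expert writes the Given/When/Then feature file, and the step definitions play the role of the keyword mapping.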

Read More »

The Partnership of Testing and Checking

Human testing is a craft that involves more than executing a bunch of tests and performing clicks and actions. A tester has a unique understanding of the system and ways to critique it. Over time, the tester develops a deeper comprehension of the application and its intricacies, integrations, weak points, and history. This makes them the best judge of where the system is likely to fail and the best person to comment on its health.

The Product Risk Knowledge Gap is the difference between what we know about the product and what we need to know. The purpose of testing is to close or at least reduce this gap.

While automated checks can help find problems in what we know (and have scripted as checks), they may not help as much in the risk areas of what we do not know about the product. That requires exploration, creativity, intuition, and domain knowledge. This is the human aspect of testing.

The creative and human aspects of testing lie with the tester, something I experienced and wrote about a few years ago as a hands-on tester: https://testwithnishi.com/2014/12/31/automation-test-suites-are-not-god/


Automated Checks

Automated scripts have built-in steps in the form of test data that we pre-define and verifications that we add. These steps are helpful for areas of the application that we need to check, double-check, or re-check a number of times, and because these checks can be made explicit, they can be automated. Since the same steps are performed the same way over and over again, this activity is better called “checking” rather than “testing.”
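
As a small illustration of such a check, here is a sketch in Python using pytest. The calculate_discount function and its rules are invented for this example; the point is that the data is pre-defined and the verification is explicit and repeated identically on every run.

    # A sketch of an automated check: pre-defined test data plus an explicit
    # verification, executed the same way on every run.
    import pytest

    def calculate_discount(order_total):
        """Toy stand-in for real application code (hypothetical rule)."""
        return 0.10 * order_total if order_total >= 100 else 0.0

    @pytest.mark.parametrize("order_total, expected_discount", [
        (50, 0.0),    # below threshold: no discount
        (100, 10.0),  # at threshold: 10% discount
        (250, 25.0),  # above threshold: 10% discount
    ])
    def test_discount_check(order_total, expected_discount):
        # The scripted verification: same inputs, same expected outputs, every time.
        assert calculate_discount(order_total) == pytest.approx(expected_discount)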

Read More »

Automation Test Suites Are Not God! 

Earlier this year, one of my articles was published at http://www.agileconnection.com, wherein I highlighted the role and use of automation in an agile context and the irreplaceable importance of manual testing.

Here are excerpts from my article; for the complete text, visit

http://www.agileconnection.com/article/automation-test-suites-are-not-god 

          Automation Test Suites Are Not God!

Working in an agile environment makes it essential to automate system testing so that tests can be rerun in each iteration. But in the nascent stages of some systems, the UI, product flow, or design itself changes in each iteration, making the automation scripts difficult to maintain. The role of automation in an agile context is the repetition of regression and redundant tasks, while the actual testing happens at the hands of manual testers. The creativity, skills, experience, and analytical thought process of a human mind cannot be replaced by automated scripts. This belief has to be ingrained in every organization’s culture in order to achieve the best quality.

Talking about software testing today is incomplete without the mention of test automation. Automation has become an important part of testing tasks and is deemed critical to the success of any software development team—and rightly so, with all its benefits like speed, reliability, reducing redundancy, and ensuring complete regression cycles within tight deadlines.

But the common perception of team managers and policy makers is that automation tools are the complete package for testing activities, and they begin expecting the world of them. A common misconception is that test automation is the “silver bullet” for improving quality, and organizations start to believe that investing once in an automation tool ends all other testing-related tasks and investments. Managers start expecting everything out of their automation suites: 100 percent coverage, minimum run times, no maintenance, and quality delivered overnight. It’s basically expecting godlike miracles to happen! Hence, there arises a need to educate teams about the actual purpose of automation and the importance of manual tests in this context.

Working in an agile environment makes it essential to automate system testing due to the bulk of regression tests required in every iteration. But what makes test automation hard in an agile context is agile’s inherent nature of constant change. Because the system under test changes continuously, the automation scripts have to be changed so often that maintaining them becomes a task in itself instead of a benefit.

As tester James Bach wrote, Test Automation Rule #1 is “A good manual test cannot be automated.” According to this thought, it is certainly possible to create a powerful and useful automated test, one that helps you know where to look and where to focus your manual exploration. But the maximum benefit thereafter comes from applying experience and exploration techniques.

This is based on the fact that humans can notice, analyze, and observe things that computers cannot. Even unskilled testers, amateur minds, or people with no knowledge of the requirements or specifications of the system under test can observe and find a lot of things that no tool will be able to.

In a true sense, automation is not actually testing; it is merely the repetition of the tasks and tests that have been performed earlier and are only required as a part of regression cycles. Automation is made powerful by the various reports and metrics associated with it.

But the actual testing still happens at the hands of a real tester, who applies his creativity, skills, experience, and analytical thinking to find and report bugs in the system under test. Once his tests pass, they are converted into automated suites for the next iteration, and so on.

So the basic job of automation suites is to free up manual testers’ time from repetitive and redundant tasks so that they can focus on the newly delivered features and find as many bugs as possible in those areas.

Therefore, it is very important not to get caught up in the various charts, coverage numbers, and metrics of our test suites. Instead, we must focus on our project’s context and requirements and, based on those, design our ratio of automated to manual tests.

A simple example to illustrate this would be testing a web form with multiple inputs spread across multiple pages. An automation script created for it would ideally open the webpage, input the values, submit them, and maybe check a couple of validations on input fields along the way. So the process would ideally be

Observe > Compare > Report

The automation would mostly perform the happy path of a user scenario, observe the behavior against the set expected results, and report whether the form passes or fails at the end.
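
As a rough sketch of that happy-path check, here is what it might look like in Python with Selenium WebDriver. The URL, field names, and confirmation message are invented for illustration and would differ in a real project.

    # A sketch of the automated happy path: Observe > Compare > Report.
    # The page URL, locators, and expected message are hypothetical.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        # Observe: open the form and fill in the pre-defined values.
        driver.get("https://example.com/registration")
        driver.find_element(By.NAME, "first_name").send_keys("Nishi")
        driver.find_element(By.NAME, "email").send_keys("nishi@example.com")
        driver.find_element(By.NAME, "submit").click()

        # Compare: check the observed result against the expected result.
        message = driver.find_element(By.ID, "confirmation").text
        passed = message == "Thank you for registering!"

        # Report: state whether the form passed or failed.
        print("PASS" if passed else f"FAIL: unexpected message '{message}'")
    finally:
        driver.quit()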

On the other hand, if we perform manual tests on the same web form, we would try entering the inputs in a different order; navigate to and from the pages and observe whether the inputs are retained; and look for usability issues such as difficulty locating the fields and navigation buttons, a font that is too small or unclear in some settings, or form submission taking so long that some performance benchmarking might be required. The process would look more like

Perform > Analyze > Compare (with existing system, specifications, experience, discussions) >
Inform (and discuss) > Recheck (if needed) >
Personal Opinion and Suggestions > Final Report.

This shows that even though the web form could easily have been tested and passed by the automation test suite, we might miss other valuable aspects if we skip the manual, experience-based tests.

Markus Gärtner, author of the book ATDD by Example, summed it up nicely when he wrote, “While automated tests focus on codifying knowledge we have today, exploratory testing helps us discover and understand stuff we might need tomorrow.”

Automation test suites, though essential, should not be thought of as the “silver bullet” of quality. The actual test effort still lies with the manual tester’s expertise and skills, without which real quality cannot be ingrained into the system. We must keep unrealistic expectations of automated tests in check, because after all, automation suites are not God!