Getting Better at Agile — Using Agile Pods


Agile is an ever-evolving field, and its rate of change has accelerated dramatically over the last decade. Businesses are constantly trying to keep up with the pace of the market and evolving their processes accordingly. Scrum has seen huge success and has eased the adoption of agile in many organizations, but there is demand to speed things up further while allowing flexibility in resource and time management. This pursuit led my team to what we call agile pods.

Agile pods are small custom agile teams, ranging from four to eight members, each responsible for a single task, requirement, or part of the backlog. This organizational system is a step toward realizing the maximum potential of our agile teams by bringing together members with different expertise and specializations, giving them complete ownership and freedom, and expecting the best-quality output.

My company, Winshuttle, started the transition toward agile pods by dividing our one large product team of about twenty people into smaller teams of four to five members and calling them pods. The requirements were divided into independent deliverables for each pod based on the expertise of its members. Because the team members were responsible for the design, plan, delivery, and quality of the output, the level of ownership increased dramatically. And because all communication happened directly, the results were faster and more precise.

Pod Organization

An agile pod is designed around the requirements of its deliverable, drawing on varying levels of management, development expertise, QA, and creative talent. These teams are customizable and may change depending on current requirements, creating a relevant ecosystem that leads to maximum innovation and faster delivery times.

Pod Features

Agile pod teams are designed to be self-sufficient. The team is self-organizing and works with minimal supervision, creating a higher sense of ownership and maturity. Because most of the required expertise is available within the team, dependency on people outside the pod is minimal. A pod stays together for as long as requirements keep coming its way, say, for one release cycle.

A pod consists of the following team members:

1. Core team: These members are dedicated full-time to their pod. They are part of all discussions, decisions, and standup meetings. The competencies of the core team members add up to the competency of the pod. Core team members may be shuffled between pods during or after a release cycle according to the expertise required.

2. Part-time specialists: These members are available part-time to support the specialized project needs of different pods and may work for multiple pods at the same time. Examples include a UI designer, a white-box tester, or an automation engineer.

3. Pod leader: The pod is led by a pod leader, who is responsible for prioritizing the work with the business management team, clarifying requirements, and periodically replenishing the queue of upcoming projects.

The project management team does not interfere with the pod's work beyond supplying requirements and clarifying questions about functionality or the priority of tasks. The pod leader acts as the interface between the pod and project management.

As in Scrum, a daily standup meeting is scheduled so everybody can meet and discuss updates, tasks, questions, and risks. Part-time specialists may join the standups whenever they are working with that pod.

Leveraging and Working in an Agile Pod

To leverage an agile pod, it is important to define clear requirements and then allow onboarding time for the team members. Their skills and specializations must be kept in mind when putting them together in a pod, and the onboarding time must be dedicated to helping them understand the process and develop an understanding among themselves. Working in such a self-organizing, disciplined structure requires guidelines, team spirit, and mutual acceptance among the team members.

In my experience as a pod leader, the first few weeks can be empowering and taxing at the same time. When our team got the responsibility to plan out the complete release cycle, requirements, design, and delivery schedule, participating in all those discussions brought exposure and a broader perspective to every team member. Though planning, triage, and prioritizing were new to most of us, once we got the hang of it, it actually felt energizing.

Further on, communication became direct between the pod and everyone outside the team, such as the project management group, the outsourced UI design team, and the other pods working in parallel. This reduced communication time, filled information gaps, and gave every opinion an equal voice, and many brilliant ideas came our way.

A major difference also appeared in the dynamics between developers and testers. We changed our seating arrangements, putting the entire pod, developers and testers included, together in a cluster. There were no secrets to be kept! Because quality was an inherent part of the pod's deliverable expectations, developers volunteered to help review the test scenarios before testing happened on their components, and testers volunteered to buddy-test each component before it was finalized and released for iteration-level testing. This led to fewer issues in our production-level builds, and demos to project management mostly went bug-free. It also meant no altercations within the team over issues, because everything was discussed, reviewed, and tested at the buddy level.

All in all, at the end of a release cycle of about five weeks, we released a major chunk of functionality that any manager would have found difficult to deliver in the normal Scrum manner in the same amount of time.

So give agile pods a shot and see how much good comes out of it!

Do share your experience!

Also read my full write-up about Agile Pods, published at:

http://www.agileconnection.com/article/using-agile-pods-realize-potential-your-team?page=0%2C2

Quality over Quantity

It is well known that testing is a never-ending activity that is never 100% complete. No software can be called totally defect-free, so there is always scope for more testing.

The problem with today's competitive software industry is the overwhelming demand for the highest quality at the lowest possible cost and in the shortest possible time. The pressure of quick delivery leads to a time and resource crunch, and the testing phase, being the last in the development cycle, bears the brunt. At this stage, practicality demands risk-based analysis: identify the most important and most impacted areas for thorough testing, and give the rest smoke-level coverage.

Though this approach is accepted by the team as well as management, the risk of running into issues later in other parts of the software is never acceptable. So the onus still falls on the testing team, who are left with no time but full responsibility if anything breaks down!

Also, when every hour counts in such critical times, there is increasing pressure within the testing team to perform at its best. Managers find ways to quantify each hour and minute by way of metrics and charts counting the number of issues found, hours logged, and builds failed by each team member.

Solution: Weigh Quality over Quantity

It is important for an organization to understand testing activities and the mindset they require. The quality of work delivered should always be weighed over the quantity of issues, hours, or tasks. Especially with manual testing, the effort can never be fully quantified.

Automated testing offers some scope for quantification in terms of code coverage, number of environments and locales covered, and so on. But overall, testing remains a creative profession.

So instead of pitting team members against each other by comparing their statistics, encourage them to sit together, brainstorm ideas, share knowledge, review each other's test scenarios, and buddy-test different stories.

Have you faced the pressure of quantification at your work?

Please share your views!

Pesticide Paradox in Software Testing

Pests and bugs sound alike? They act alike too!

Boris Beizer, in his book Software Testing Techniques (1990), coined the term pesticide paradox to describe the phenomenon that the more you test software, the more immune it becomes to your tests.

Just as insects eventually build up resistance when the same pesticide is applied again and again, software undergoing the same repetitive tests builds resistance to them:

  • Tests run unchanged, time after time, eventually stop catching bugs; everything they could find has already been found and fixed.
  • Moreover, some of the new defects introduced into the system will fall outside the reach of the existing tests and will escape into the field.

Solution: Refresh and Revise Test Materials Regularly

To overcome the pesticide paradox, testers must regularly develop new tests that exercise the various parts of the system and their interconnections in order to find additional defects.

Also, testers cannot rely forever on existing test techniques or methods; they must be on the lookout for ways to continually improve those methods and make testing more effective.

Keep revisiting test cases regularly and revising them. Though agile schedules leave little spare time for such activities, the testing team should keep planning these exercises in order to keep performing at its best. A few ideas to achieve this (a code sketch follows the list):

  • Brainstorming sessions, to generate more test ideas around the same component
  • Buddy reviews: newcomers to the team are encouraged to bring a fresh perspective to the existing test scenarios for the product, which may get new cases added
  • Striking out older tests for functionality that has changed or been removed
  • Building new tests from scratch when a major change is made in a component, to open a fresh perspective
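One concrete way to build this kind of variation into a suite is property-based testing, where inputs are generated fresh on every run instead of being hard-coded. Below is a minimal sketch in Python using the Hypothesis library; the validate_username function is a hypothetical system under test, invented for illustration.

```python
# Minimal sketch of fighting the pesticide paradox with generated inputs.
# Requires: pip install pytest hypothesis
import re

from hypothesis import given, strategies as st


def validate_username(name: str) -> bool:
    """Hypothetical system under test: 3-16 ASCII letters, digits, or underscores."""
    return re.fullmatch(r"\w{3,16}", name, flags=re.ASCII) is not None


def test_known_good_name():
    # Example-based test: a fixed input, so it stops finding new bugs after run one.
    assert validate_username("alice_01")


@given(st.text())
def test_accepted_names_meet_the_spec(name):
    # Hypothesis generates different strings on every run (unicode, whitespace,
    # boundary lengths), so this check keeps probing inputs a fixed list never would.
    if validate_username(name):
        assert 3 <= len(name) <= 16
        assert all(c.isalnum() or c == "_" for c in name)
```

Run under pytest, the example-based test repeats the same check forever, while the property-based one keeps exploring new inputs, which is exactly the kind of regular test refreshment the pesticide paradox calls for.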


UPDATE:

This article has been recommended and used as a reference by Hannes Lindblom in his blog at https://konsultbolag1.se/bloggen/veckans-testartips-15-tur-genom-variation


Automation Test Suites Are Not God! 

Earlier this year, one of my articles was published at http://www.agileconnection.com, wherein I highlighted the role and use of automation in an agile context and the irreplaceable importance of manual testing.

Here are excerpts from my article; for the complete text, visit

http://www.agileconnection.com/article/automation-test-suites-are-not-god 


Working in an agile environment makes it essential to automate system testing so tests can be rerun in each iteration. But in the nascent stages of some systems, the UI, product flow, or design itself changes in each iteration, making the automation scripts difficult to maintain. The role of automation in an agile context is the repetition of regression and redundant tasks, while the actual testing happens at the hands of manual testers. The creativity, skills, experience, and analytical thought process of a human mind cannot be replaced by automated scripts. This belief has to be ingrained in every organization's culture in order to achieve the best quality.

Talking about software testing today is incomplete without mention of test automation. Automation has become an important part of testing and is deemed critical to the success of any software development team, and rightly so, given benefits like speed, reliability, reduced redundancy, and complete regression cycles within tight deadlines.

But the common perception among team managers and policymakers is that automation tools are the complete package for testing activities, and they begin expecting the world of them. A common misconception is that test automation is the "silver bullet" for improving quality, and organizations start to believe that investing once in an automation tool ends all other testing-related tasks and investments. Managers start expecting everything of their automation suites: 100 percent coverage, minimum run times, no maintenance, and quality delivered overnight. It's basically expecting godlike miracles to happen! Hence there arises a need to understand the actual purpose of automation and the importance of manual tests in this context.

Working in an agile environment makes it essential to automate system testing due to the bulk of regression tests required in every iteration. But what makes test automation hard in an agile context is agile's inherent nature of constant change. Because the system under test changes continuously, the automation scripts have to be changed so often that they become a task in themselves instead of a benefit.
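One common way to contain this maintenance burden (a general technique, not something prescribed in this article) is the Page Object pattern: all knowledge of a screen's locators lives in one class, so a UI change touches one file rather than every script. Here is a minimal sketch in Python with Selenium; the URL and element IDs are hypothetical.

```python
# Minimal Page Object sketch (Python + Selenium). The URL and element IDs
# are hypothetical; the point is where the locators live, not their values.
from selenium import webdriver
from selenium.webdriver.common.by import By


class LoginPage:
    """All knowledge of the login screen is kept in this one class, so a UI
    change means editing one file instead of every script that logs in."""

    URL = "https://example.com/login"  # hypothetical

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def log_in(self, user, password):
        self.driver.find_element(By.ID, "username").send_keys(user)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()


def test_login_happy_path():
    driver = webdriver.Chrome()
    try:
        LoginPage(driver).open().log_in("demo_user", "demo_pass")
        # The test asserts on behavior, not on page internals.
        assert "dashboard" in driver.current_url
    finally:
        driver.quit()
```

When the login screen's layout changes in the next iteration, only LoginPage needs updating, and every test that logs in keeps working unchanged.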

As tester James Bach wrote, Test Automation Rule #1 is "A good manual test cannot be automated." By this thinking, it is certainly possible to create a powerful and useful automated test, one that helps you know where to look and where to direct your manual exploration. But the maximum benefit thereafter comes from applying experience and exploration techniques.

This is based on the fact that humans can notice, analyze, and observe things that computers cannot. Even unskilled testers and amateur minds, with no knowledge, requirements, or specifications of the system under test, can observe and find plenty of things no tool will be able to.

In a true sense, automation is not actually testing; it is merely the repetition of tasks and tests that have been performed earlier and are required only as part of regression cycles. Automation is made powerful by the reports and metrics associated with it.

But the actual testing still happens at the hands of a real tester, who applies creativity, skills, experience, and analysis to find and report bugs in the system under test. Once those tests pass, they are converted into automated suites for the next iteration, and so on.

So the basic job of automation suites is to free manual testers' time and resources from repetitive and redundant tasks so that they can concentrate on newly delivered features and find the maximum number of bugs in those areas.

Therefore, it is very important not to get caught up in the various charts, coverage numbers, and metrics of our test suites. Instead we must focus on our project's context and requirements and, based on those, design our ratio of automated to manual tests.

A simple example to illustrate this is testing a web form with multiple inputs spread across multiple pages. An automation script created for it would ideally open the webpage, input the values, submit them, and maybe check a couple of validations on input fields along the way. So the process would ideally be

Observe > Compare > Report

The automation performs mostly the happy path of a user scenario, observes the behavior against the expected results, and reports whether the form passes or fails at the end.
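As a rough sketch of that Observe > Compare > Report flow, here is what such a happy-path script might look like in Python with Selenium; the URL, field names, and confirmation element are hypothetical.

```python
# Sketch of the happy-path form automation described above, staged as
# Observe > Compare > Report. URL, field names, and IDs are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By


def test_registration_form_happy_path():
    driver = webdriver.Chrome()
    try:
        # Observe: open the form and fill in the inputs page by page.
        driver.get("https://example.com/register")  # hypothetical URL
        driver.find_element(By.NAME, "first_name").send_keys("Ada")
        driver.find_element(By.NAME, "email").send_keys("ada@example.com")
        driver.find_element(By.ID, "next").click()
        # ...fill any remaining pages the same way, then submit.
        driver.find_element(By.ID, "submit").click()

        # Compare: read the outcome and check it against the expected result.
        message = driver.find_element(By.ID, "confirmation").text

        # Report: the assertion passes or fails the form as a whole.
        assert "Thank you" in message
    finally:
        driver.quit()
```

Note how the script checks only its predefined expectations; anything outside them, such as a lost input or a confusing layout, goes unreported.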

On the other hand, if we test the same web form manually, we would try entering the inputs in a different order; navigate back and forth between the pages, observing whether the inputs are retained; and look for usability issues, such as fields or navigation buttons that are hard to locate, fonts that are too small or unclear in some settings, or form submission taking so long that performance benchmarking might be required. The manual process looks more like

Perform > Analyze > Compare (with the existing system, specifications, experience, discussions) > Inform (and discuss) > Recheck (if needed) > Personal Opinion and Suggestions > Final Report

This shows that though the web form could easily have been tested and passed by the automation suite, we might miss other valuable aspects if we skip the manual and experience-based tests.

Markus Gärtner, author of the book ATDD by Example, summed it up nicely when he wrote, "While automated tests focus on codifying knowledge we have today, exploratory testing helps us discover and understand stuff we might need tomorrow."

Automation test suites, though essential, should not be thought of as the "silver bullet" of quality. The real test effort still lies with the manual tester's expertise and skills, without which quality cannot be ingrained into the system. We must keep a check on unrealistic expectations for automation tests, because, after all, automation suites are not God!