Tuesday, April 15, 2014

Concurrency testing made easy

When a multi-tenant application is under test, concurrency testing is the next step of the testing cycle. Testing with concurrent users takes a lot of time and often ends up in defects that are not fully reproducible because of timing (synchronization) issues, which makes automating these tests a must.

Automating concurrent tests with Selenium / WebDriver always brings up one of the most common questions in test automation: do I open a new window or a new tab? The correct question, however, is: how do I open a new session? And here is the tricky part of these tests. If you open a new window or tab, you carry over the session from the window you started from. Some sites suggest "clearing the cookies", but this affects the whole browser, not a specific window, so the newly assigned session id is the same in all existing windows and tabs.
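In practice each user needs its own WebDriver instance, because every driver instance keeps its own cookie store and therefore its own session. As a minimal sketch outside of Stevia (the URL and login flow are only illustrative), two independent sessions look like this:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class TwoSessionsSketch {
    public static void main(String[] args) {
        // Each driver owns its own browser profile and cookie store,
        // so the application sees two completely independent sessions.
        WebDriver userOne = new FirefoxDriver();
        WebDriver userTwo = new FirefoxDriver();
        try {
            userOne.get("http://app.example.com/login");
            userTwo.get("http://app.example.com/login");
            // ... log in as two different users and drive the concurrent scenario ...
        } finally {
            userOne.quit();
            userTwo.quit();
        }
    }
}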

The solution to the problem described above is to start two or more Selenium / WebDriver sessions. To simplify this, we use a specific Stevia feature to instantiate two WebDriver controllers. The instantiation is declared in a Spring XML file we call test-controllers and includes the following:

<bean id="webDriverControllerSession1"  class="controllers.ControllerSession1" scope="prototype"/>
<bean id="webdriver-s1-driver" class="com.persado.oss.quality.stevia.selenium.core.controllers.registry.Driver"
   p:name="webdriver-s1"
   p:className="controllers.WebDriverS1WebControllerFactoryImpl"/>
<bean id="webDriverControllerSession2"  class="controllers.ControllerSession2" scope="prototype"/>
<bean id="webdriver-s2-driver" class="com.persado.oss.quality.stevia.selenium.core.controllers.registry.Driver"
   p:name="webdriver-s2"
   p:className="controllers.ControllerFactoryImpl"/>


Note: the Spring p namespace schema should be declared in the XML file, and the controller factory implementations should be created in the appropriate packages.
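For reference, a minimal <beans> header declaring the p namespace looks roughly like this (schema versions may differ in your project):

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:p="http://www.springframework.org/schema/p"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <!-- controller and driver bean definitions go here -->
</beans>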

Having the two controllers at hand, we can use the Stevia annotation @RunsWithController(controller=controllers.ControllerSession1.class) before any @BeforeClass or @Test annotated method.

The next big problem is how to synchronize the sessions so that we keep control of the test execution. Synchronization in Java is quite an advanced topic and there are many techniques, to mention a few: CyclicBarrier and CountDownLatch. These techniques involve heavy coding and are overkill here. The most appropriate way to synchronize the execution of tests running on different controllers is to use the dependency attributes (dependsOnMethods) provided by TestNG.

Let's use a specific example to put things into perspective:

Let's assume there is a phonebook application where you can create a custom phonebook and assign contacts to it. You can edit or delete the phonebook only as long as it is empty. Now assume that in session 1 a user creates a phonebook, and in session 2 a second user assigns a contact to it. The user in session 1 is not aware that an entry has been added to the phonebook and wants to edit the phonebook name. As soon as the user presses the edit button, he / she is presented with an error message stating that editing a non-empty phonebook is not allowed.
In the aforementioned scenario there are two clear preconditions:
  • User in session 1 creates a phonebook
  • User in session 2 adds a contact to the phonebook
And a test step with the following action and expected result:
  • User 1 presses the edit button => User is presented with an error message

The synchronization sequence is shown below:



For the two preconditions, the synchronization can be done using TestNG @BeforeClass annotations as follows:

@RunsWithController(controllers.ControllerSession1.class)
@BeforeClass
public void createPreconditionsSession1() {
     // User in session 1 creates phonebook
}

@RunsWithController(controllers.ControllerSession2.class)
@BeforeClass(dependsOnMethods = "createPreconditionsSession1")
public void createPreconditionsSession2() throws Exception {
    // User in session 2 adds a contact in phonebook
}

@RunsWithController(controllers.ControllerSession1.class)
@Test(alwaysRun = true, description = "Edit phonebook name")
public void testStep_1() {
    // User presses the edit button => User is presented with error message
}
TestNG will execute createPreconditionsSession1 -> createPreconditionsSession2 -> testStep_1

The next big question with this implementation is: how do I run preconditions on different sessions for each step? TestNG lacks a "before this specific test method" hook, and using a @Test annotation for a precondition adds noise (the step is counted in the results as passed even though it is only a precondition), so we need an extra annotation to execute the precondition. The latest SNAPSHOT of Stevia solves this problem by introducing an annotation that allows a series of pre- and post-conditions to be executed around a @Test annotated step. Let's use the previous example and add a step to our test verifying the story: "User in session 1 created a new phonebook. User in session 2 adds a contact. User in session 1 presses the delete button for the created phonebook."

@Preconditions(controller = controllers.ControllerSession2.class, value = {"test3Pre1", "test3Pre2"})
@RunsWithController(controllers.ControllerSession1.class)
@Test(alwaysRun = true, description = "", dependsOnMethods = {"testStep_1"})
public void testStep_2() {
    // User presses the delete button => User is presented with error message
}

public void test3Pre1() {
    // User in session 2 adds a contact to the created phonebook
}

Note: the second phonebook (for the delete-action test step) is created in the preconditions section along with the first phonebook (for the edit-action test step).

With these powerful annotations in Stevia you can orchestrate any test script execution without sacrificing any testing standards. As a result, quality is raised to a new level by having test step pre- and post-conditions independent of your testing framework (JUnit or TestNG).



Friday, April 4, 2014

Stevia enhancements and documentation


Lately, we've been using Stevia a lot; we're also extending it as much as possible (no promises, but PhantomJS is in our crosshairs lately). By talking to the community, it has come to our attention that people have difficulty starting up a Stevia project. To that end, we've created some content to give you a boost:

  1. Our SNAPSHOT javadoc is regularly published on GitHub Pages,
  2. We've set up Travis continuous integration,
  3. We have a brand-new, all-shining Stevia Quick Start Guide.

We hope that the above additions will give you a helping hand. If not, feel free to send us your bugs/comments via the issue tracker on GitHub. We're listening.

Wednesday, February 19, 2014

Page Objects are not enough

The Page Object design pattern is one of the most popular patterns in test automation because it promotes test maintainability and unifies the way QA teams work. As the pattern dictates, a page object is the representation of an HTML page as an object. The object contains the HTML element locators (XPath, CSS or DOM) and exposes the user actions on them as methods (press, input, select, etc.).

Although the page object pattern is widely accepted, our work experience showed that it is inadequate to fully describe an HTML page, because it lacks the ability to represent the business logic of the page. In order to solve this problem we use an entity complementary to the page object: the business object.



The business object is a class containing the business logic behind the page. An example of HTML page business logic is the validation of the mandatory fields of a form. The page object, by definition, cannot contain this logic; it can only express that pressing the submit button raises an error, without the reasoning behind it. The business object complements the page object by offering a method that tries to submit the form without the mandatory fields completed and verifies that an error is raised. This complementary nature holds because the business object uses the page object's methods to assemble the logic; its methods are essentially compositions of page object methods.
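As a rough sketch of this separation (the class names, locators and methods below are purely illustrative, not taken from a real project), the two layers could look like this:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Page object: knows only locators and raw user actions.
class RegistrationPage {
    private final WebDriver driver;

    RegistrationPage(WebDriver driver) {
        this.driver = driver;
    }

    void typeEmail(String email) {
        driver.findElement(By.id("email")).sendKeys(email);
    }

    void pressSubmit() {
        driver.findElement(By.id("submit")).click();
    }

    String getErrorMessage() {
        return driver.findElement(By.cssSelector(".error")).getText();
    }
}

// Business object: composes page object methods into business logic.
class RegistrationBusinessObject {
    private final RegistrationPage page;

    RegistrationBusinessObject(RegistrationPage page) {
        this.page = page;
    }

    // Try to submit the form with the mandatory fields left empty and return the raised error.
    String submitWithoutMandatoryFields() {
        page.pressSubmit();
        return page.getErrorMessage();
    }
}

A test then asserts against submitWithoutMandatoryFields() and never touches a locator directly, so locator changes stay inside the page object while logic changes stay inside the business object.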

The breakthrough of this approach is that it keeps the logic clearly separated from the HTML elements, introducing an abstraction layer in the business object and thereby isolating future changes in the page logic from its locators. In addition, it can introduce reusability (a business object may span more than one page, and hence more than one page object). Using the previous example, removing a mandatory field does not affect the page object, only the business object, so the change is made in a single place.


In a later post I will try to expand on how we implement our business objects and what we do in special cases, such as repeating pages within the software under test.


Tuesday, February 11, 2014

Gray box testing is the way to adapt for automation testing

When we finished the design of our objects for setting up the preconditions of an automated test case, we showed it to one of our developers and his first reaction was "it is the same as the app's domain model". This comment made me think that knowing the application internals would be extremely useful in developing correct test scripts, and that gray box testing would be more appropriate for our automated tests than black box testing, so a switch in that direction is worth considering.

Why is gray box testing more appropriate for automation than for manual testing? Because an automation tester has the technical skills to understand the internals of a software application and to write testing code that interfaces with the available hooks or services, something that a manual (non-programming-savvy) tester typically does not.

As a case study I used the project I was working on at the time. In this project there was a front end used as an emulator of an external system; the front-end app retrieves its data from a database. Black box testing ignores where the front-end app gets its data from, as long as the displayed data is correct. While this is acceptable for any functional test case, the big picture reveals shady areas of operation: for large amounts of data we need pagination (more effort and man hours from the developers), in parallel testing there are significant delays due to concurrency issues (the front end was not initially designed for heavy traffic), and in some cases there was data loss, misleading the testing team into opening false defects.

Switching to gray box testing to learn the internals (understanding how data is written to the database and how it is queried by the front end) was simple with the use of some Spring connectors to read the database directly (a sketch follows the list below). The advantages were the following:

  • No more maintenance of the front-end emulator
  • Parallel testing speed-up
  • Single point of failure (the database, not the emulator)
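
As a rough illustration of such a connector (the connection settings, table and column names are invented placeholders, not the project's real ones), Spring's JdbcTemplate lets the test code read the data directly:

import java.util.List;
import java.util.Map;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

public class GrayBoxDbReader {

    private final JdbcTemplate jdbcTemplate;

    public GrayBoxDbReader() {
        // Hypothetical connection details for the system under test.
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName("com.mysql.jdbc.Driver");
        dataSource.setUrl("jdbc:mysql://localhost:3306/appdb");
        dataSource.setUsername("qa");
        dataSource.setPassword("secret");
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    // Read the rows the front-end emulator would otherwise have displayed.
    public List<Map<String, Object>> fetchMessages(String batchId) {
        return jdbcTemplate.queryForList(
                "SELECT * FROM messages WHERE batch_id = ?", batchId);
    }
}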


Switching to gray box testing made our scripts more robust and lifted the maintenance effort from the development team, therefore becoming more suitable for our needs. Expanding the gray box technique allowed us to start using services like JMX to communicate with the app in a more accurate and direct way (see the sketch below), leading to testing of various paths inside the app's code that were not initially exposed to us. This helped us reveal not functional but bottleneck-related bugs in the software.
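As an illustration of the JMX route (the service URL, MBean name and attribute below are hypothetical placeholders, not the actual application's), a test could query an exposed MBean like this:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxHookSketch {

    public static void main(String[] args) throws Exception {
        // Hypothetical JMX endpoint of the application under test.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // Hypothetical MBean and attribute exposed by the application.
            ObjectName bean = new ObjectName("com.example.app:type=MessageQueue");
            Object queueDepth = connection.getAttribute(bean, "QueueDepth");
            System.out.println("Current queue depth: " + queueDepth);
        } finally {
            connector.close();
        }
    }
}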

Gray box testing is the way to go for any automation tester, because it can reveal potential bottlenecks inside the code before performance testing does, and because it simplifies the way test scripts are created. Gray box testing pushes you to learn more tools for hooking into the app, expanding your software development skills and making you a better automation engineer.



Tuesday, November 5, 2013

Modify ReportNG to show tests with known defects in separate column


During the automated regression cycles there are some defects that are carried over through the iterations, so the corresponding tests appear as failed in ReportNG. Because new tests may introduce new defects that also appear as failed in the report, it is sometimes difficult and time-consuming to distinguish between the old and the new defects. In this post we describe a technique to show the tests with known defects in a separate column in ReportNG.

The first step is to somehow 'mark' the test steps in which the known defects appear. There are several ways to do this using features of the TestNG framework. One way is to use the expectedExceptions attribute of the @Test annotation.

expectedExceptions lists the exceptions that a test method is expected to throw. If no exception is thrown, or an exception not in this list is thrown, the test is marked as a failure. Assuming that the test step with the known defect fails at an assertion point, the code looks like this:

@Test(expectedExceptions = AssertionError.class)
public void testStepX() {
    // Code
    // Assertion point that fails
}
After this code is executed, the test step with the known defect is considered passed.

The second step is to collect all these tests through a listener or a setup class.
If you have a superclass that all your test classes extend, you can include the following code in it. (This technique uses the native dependency injection of the TestNG framework: http://testng.org/doc/documentation-main.html#native-dependency-injection)

@AfterMethod(alwaysRun = true)
protected void ignoreResultUponExpectedException(ITestResult result) {
    if (result.isSuccess() && result.getMethod().getMethod().getDeclaredAnnotations()[0].toString().contains("expectedExceptions=[class")) {
        result.getTestContext().getPassedTests().removeResult(result.getMethod());
        result.setThrowable(new Throwable("MARKED AS TEST WITH KNOWN DEFECT!!"));
        result.getTestContext().getSkippedTests().addResult(result, result.getMethod());
    }
}
An alternative way is to do this through a TestNG Listener:

import org.testng.IInvokedMethod;
import org.testng.IInvokedMethodListener;
import org.testng.ITestResult;

public class MyListener implements IInvokedMethodListener {

    @Override
    public void beforeInvocation(IInvokedMethod method, ITestResult testResult) {
    }

    @Override
    public void afterInvocation(IInvokedMethod method, ITestResult testResult) {
        if (method.isTestMethod()) {
            if (testResult.isSuccess() && testResult.getMethod().getMethod().getDeclaredAnnotations()[0].toString().contains("expectedExceptions=[class")) {
                testResult.getTestContext().getPassedTests().removeResult(testResult.getMethod());
                testResult.setThrowable(new Throwable("MARKED AS TEST WITH KNOWN DEFECT!!"));
                testResult.getTestContext().getSkippedTests().addResult(testResult, testResult.getMethod());
            }
        }
    }
}

At this point the tests with the known defects appear as skipped (yellow in ReportNG) with a specific exception message.

One could stop here, since the tests with the known defects are now in the skipped column. Nevertheless, this may also be confusing, because there are cases where we have additional skipped tests, namely when a TestNG configuration method fails (e.g. when @BeforeClass fails).

The point here is that if we want to further separate the tests with the known defects, we must change the native ReportNG code and rebuild ReportNG.
The code is available at: https://github.com/dwdyer/ReportNG/downloads
So we must import the ReportNG project into our IDE (Eclipse, NetBeans or IntelliJ IDEA). It is a good idea to convert it to a Maven project in order to build the jar we will use more easily.

The main ReportNG class responsible for gathering and post-processing the TestNG results is HTMLReporter.java. The change in this class is in the createResults method, so that the tests with known defects are put under a separate key in the Velocity context. ReportNG uses the Apache Velocity template framework to generate the results; for more info about Apache Velocity you can visit: http://velocity.apache.org/

public static final String KNOWN_DEFECTS_TESTS_KEY = "knownDefects";

@SuppressWarnings("deprecation")
private void createResults(List<ISuite> suites, File outputDirectory, boolean onlyShowFailures) throws Exception {
    int index = 1;
    for (ISuite suite : suites) {
        int index2 = 1;
        for (ISuiteResult result : suite.getResults().values()) {
            boolean failuresExist = result.getTestContext().getFailedTests().size() > 0
                    || result.getTestContext().getFailedConfigurations().size() > 0;
            if (!onlyShowFailures || failuresExist) {
                IResultMap skippedTests = result.getTestContext().getSkippedTests();
                IResultMap knownDefects = new ResultMap();
                for (ITestResult tr : skippedTests.getAllResults()) {
                    if (tr.getMethod().getMethod().getDeclaredAnnotations()[0].toString().contains("expectedExceptions=[class")) {
                        skippedTests.removeResult(tr.getMethod());
                        knownDefects.addResult(tr, tr.getMethod());
                    }
                }
                VelocityContext context = createContext();
                context.put(RESULT_KEY, result);
                context.put(FAILED_CONFIG_KEY, sortByTestClass(result.getTestContext().getFailedConfigurations()));
                context.put(SKIPPED_CONFIG_KEY, sortByTestClass(result.getTestContext().getSkippedConfigurations()));
                context.put(FAILED_TESTS_KEY, sortByTestClass(result.getTestContext().getFailedTests()));
                context.put(KNOWN_DEFECTS_TESTS_KEY, sortByTestClass(knownDefects));
                context.put(SKIPPED_TESTS_KEY, sortByTestClass(skippedTests));
                context.put(PASSED_TESTS_KEY, sortByTestClass(result.getTestContext().getPassedTests()));
                String fileName = String.format("suite%d_test%d_%s", index, index2, RESULTS_FILE);
                generateFile(new File(outputDirectory, fileName), RESULTS_FILE + TEMPLATE_EXTENSION, context);
            }
            ++index2;
        }
        ++index;
    }
}
The final changes should be made in the overview.html.vm, reportng.properties and reportng.css files:

In reportng.properties add the property:
knownDefects=Known Defects
In reportng.css add the style for the known-defects tests (I have set the color to pink):
.knownDefects            {background-color: #ff3399;}
.test .knownDefects      {background-color: #ff99cc;}
Finally, in overview.html.vm we make the most changes, in order to show the tests with the known defects in a separate column (the full template is shown after the next code snippet). If you make these changes, build a ReportNG jar (through mvn install) and put this jar on your project's classpath when generating the results, you will get a separate column with the known defects. Note that once a defect is fixed, the test will fail with a message like "expected exception was ... but ...": this is an indication that you must remove the expectedExceptions attribute from that test step. Note also that the overview template calls a $utils.getKnownDefects(...) helper, so a corresponding method has to be added to ReportNG's utils class.
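A possible shape for that helper is sketched below; it is not part of stock ReportNG and simply mirrors the annotation check used in HTMLReporter above (ITestResult and IResultMap come from org.testng, List and ArrayList from java.util):

public List<ITestResult> getKnownDefects(IResultMap skippedTests) {
    List<ITestResult> knownDefects = new ArrayList<ITestResult>();
    for (ITestResult result : skippedTests.getAllResults()) {
        // Same marker as before: the method declares expectedExceptions.
        if (result.getMethod().getMethod().getDeclaredAnnotations()[0]
                .toString().contains("expectedExceptions=[class")) {
            knownDefects.add(result);
        }
    }
    return knownDefects;
}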

#foreach ($suite in $suites)
<table class="overviewTable">
  #set ($suiteId = $velocityCount)
  #set ($totalTests = 0)
  #set ($totalPassed = 0)
  #set ($totalSkipped = 0)
  #set ($totalFailed = 0)
  #set ($totalKnownDefects = 0)
  #set ($totalFailedConfigurations = 0)
  <tr>
    <th colspan="8" class="header suite">
      <div class="suiteLinks">
        #if (!$suite.invokedMethods.empty)
        ##<a href="suite${suiteId}_chronology.html">$messages.getString("chronology")</a>
        #end
        #if ($utils.hasGroups($suite))
        <a href="suite${suiteId}_groups.html">$messages.getString("groups")</a>
        #end       
      </div>
      ${suite.name}
    </th>
  </tr>
  <tr class="columnHeadings">
    <td>&nbsp;</td>
    <th>$messages.getString("duration")</th>
    <th>$messages.getString("passed")</th>
    <th>$messages.getString("skipped")</th>
    <th>$messages.getString("failed")</th>
    <th>$messages.getString("knownDefects")</th>
    <th>$messages.getString("failedConfiguration")</th>
    <th>$messages.getString("passRate")</th>
  </tr>

  #foreach ($result in $suite.results)
  #set ($notPassedTests = $result.testContext.skippedTests.size() + $result.testContext.failedTests.size())
  #set ($total = $result.testContext.passedTests.size() + $notPassedTests)
  #set ($totalTests = $totalTests + $total)
  #set ($totalPassed = $totalPassed + $result.testContext.passedTests.size())
  #set ($totalKnownDefects = $totalKnownDefects + $utils.getKnownDefects($result.testContext.skippedTests).size())
  #set ($totalSkipped = $totalSkipped + $result.testContext.skippedTests.size() -$utils.getKnownDefects($result.testContext.skippedTests).size())
  #set ($totalFailed = $totalFailed + $result.testContext.failedTests.size())
  #set ($totalFailedConfigurations = $totalFailedConfigurations + $result.testContext.failedConfigurations.size())
  #set ($failuresExist = $result.testContext.failedTests.size()>0 || $result.testContext.failedConfigurations.size()>0)

  #if (($onlyReportFailures && $failuresExist) || (!$onlyReportFailures))
  <tr class="test">
   <td class="test">
    <a href="suite${suiteId}_test${velocityCount}_results.html">${result.testContext.name}</a>

    </td>
    <td class="duration">
      $utils.formatDuration($utils.getDuration($result.testContext))s
    </td>
    #if ($result.testContext.passedTests.size() > 0)
    <td class="passed number">$result.testContext.passedTests.size()</td>
    #else
    <td class="zero number">0</td>
    #end

    #if ($result.testContext.skippedTests.size() - $utils.getKnownDefects($result.testContext.skippedTests).size() > 0)
    #set ($skipped = $result.testContext.skippedTests.size() - $utils.getKnownDefects($result.testContext.skippedTests).size())

    <td class="skipped number">$skipped</td>
    #else
    <td class="zero number">0</td>
    #end

    #if ($result.testContext.failedTests.size() > 0)
    <td class="failed number">$result.testContext.failedTests.size()</td>
    #else
    <td class="zero number">0</td>
    #end

    #if ($utils.getKnownDefects($result.testContext.skippedTests).size() > 0)
    <td class="knownDefects number">$utils.getKnownDefects($result.testContext.skippedTests).size()</td>
    #else
    <td class="zero number">0</td>
    #end

    #if ($result.testContext.failedConfigurations.size() > 0)
    <td class="failed number">$result.testContext.failedConfigurations.size()</td>
    #else
    <td class="zero number">0</td>
    #end

    <td class="passRate">
      #if ($total > 0)
      #set ($passRate = (($total - $notPassedTests) * 100 / $total))
      $passRate%
      #else
      $messages.getString("notApplicable")
      #end
    </td>

  </tr>
  #end
  #end

    <tr class="suite">
    <td colspan="2" class="totalLabel">$messages.getString("total")</td>

    #if ($totalPassed > 0)
    <td class="passed number">$totalPassed</td>
    #else
    <td class="zero number">0</td>
    #end
    #if ($totalSkipped > 0)
    <td class="skipped number">$totalSkipped</td>
    #else
    <td class="zero number">0</td>
    #end

    #if ($totalFailed > 0)
    <td class="failed number">$totalFailed</td>
    #else
    <td class="zero number">0</td>
    #end
  
    #if ($totalKnownDefects > 0)
    <td class="knownDefects number">$totalKnownDefects</td>
    #else
    <td class="zero number">0</td>
    #end 
    #if ($totalFailedConfigurations > 0)
    <td class="failed number">$totalFailedConfigurations</td>
    #else
    <td class="zero number">0</td>
    #end

    <td class="passRate suite">
      #if ($totalTests > 0)
      #set ($passRate = (($totalTests - $totalSkipped - $totalFailed -$totalKnownDefects) * 100 / $totalTests))
      $passRate%
      #else
      $messages.getString("notApplicable")
      #end
    </td>

  </tr>
</table>




Tuesday, September 17, 2013

Get Table Data

One of the most common tasks in our project is to retrieve data from a table in order to assert on it. In this post I will describe a unified way to get the required data from any table into a specific format, so that my assertions are well defined.
For the assertion points to be well defined, I usually prefer to have my actual data in the form of a Map(key, value), so my assertions take the form:
Assert.assertEquals(data.get(key),expected_value)
I selected the key of my map to be the value of the first column of each row, and its value to be a second map containing the column names as keys and the column values as values:
Map(row_n_value : Map(column_2_name:column_2_value, ..., column_n_name:column_n_value))
Table values, rather than table indexes, were chosen for the keys for maintainability reasons: if a column is added or the table is not sorted, an index would return the wrong result, while the value would not.
The resulting assertions look like:
Assert.assertEquals(data.get(row_1_value).get(column_2_name),expected_value)
Assert.assertEquals(data.get(row_1_value).get(column_3_name),expected_value)
The first thing we need to do in order to construct our map is to get the number of rows and columns of the table as follows:
rows = selenium.getCssCount("css=table tbody tr").intValue()
columns = selenium.getCssCount("css=table tbody tr td").intValue()
With the number of rows and columns at hand, the next step is to retrieve the names of the columns from the table header, as follows in Groovy:
public List<String> getTableColumnNames() {
    def headerNames = []
    (1..selenium.getCssCount("css=table thead tr th").intValue()).each { column ->
        if (!selenium.getText("css=table thead tr th:nth-child(" + column + ")").isEmpty()) {
            headerNames << selenium.getText("css=table thead tr th:nth-child(" + column + ")")
        }
    }
    return headerNames
}
Having the column names the next step is to construct the desired map as follows in groovy:
public HashMap<String, HashMap<String, String>> getTableInfo() {
    selenium.waitForElement(componentName)
    def tableMap = [:]
    def columnNames = getTableColumnNames()
    (1..selenium.getCssCount("css=table tbody tr").intValue()).each { row ->
        def columnMap = [:]
        (2..selenium.getCssCount("css=table tbody tr td").intValue()).each { column ->
            columnMap.put(columnNames[column - 1], controller().getText(componentName + ":nth-child(" + row + ") *:nth-child(" + column + ")"))
        }
        tableMap.put(controller().getText(componentName + ":nth-child(" + row + ") td:nth-child(1)"), columnMap)
    }
    return tableMap
}
The above implementation can be found embedded in Stevia, enriched with code that detects your locator style (XPath, CSS or ID).

The above table scan could be altered to accept only td elements as columns by changing the column map locator to
td:nth-child("+column+")
instead of
*:nth-child("+column+")
Stevia includes similar methods such as: 
  • getTableInfoAsList 
  • getTableElements2DArray
  • getTableElementTextUnderHeader

Thursday, August 29, 2013

Regression Suites

In our agile project we applied regression testing with test automation and continuous execution of our automated test scripts. This process served us well while the test execution lasted less than 4 hours, but it started to create major problems when the total number of test scripts increased and the execution time exceeded one day. A quick relief came with the introduction of parallel execution of test scripts, but in cases where management wanted a quick answer to "Do we deploy to production or not?" it still wasn't enough.

One way to design effective regression suites, ensuring the continuity of business functions, is to follow the rule:
Regression test suite != sum(Functional test cases)
The purpose of functional tests is to explore the behavior of specific business functions and highlight corner cases, while the purpose of regression tests is to give an overall view of the entire system. In this light, the regression suite should include test scripts that inspect the end-to-end business functions the system encapsulates.

For the purposes of our project (an agile environment), the regression suites were designed with the hypothesis that the GUI, boundary and GUI-negative tests should be separated from the runtime tests, because:
  1. Changes in the code around boundaries are rare
  2. The GUI is indirectly tested (the application itself is used for the preconditions of each test case)
This separation automatically leads us to designing different regression suites for these special categories.

Moreover, a vertical separation according to execution frequency is needed. In an agile environment with two-week sprints and a four-sprint release cycle, effective regression execution times could be at the end of each sprint, every other sprint, and at the end of the release. The schedule of each regression suite can be decided according to how often the corresponding part of the code changes. For example, a boundary test is highly unlikely to change, so it could be planned to run only once per release, preferably one sprint before the release sprint (the last sprint is usually reserved for new features).



As mentioned before, at the end of each sprint the new-features suite should be executed additively. For example, if in sprint 1 we have 5 features and in sprint 2 we have 3 more features, the regression suite should have cases testing the 5 features (end of sprint 1) and cases testing all 8 features (end of sprint 2).