MetaBoss Testing Framework Guide
Regression Testing
Overview
A regression test is used to answer questions like "Has system behaviour changed since
a certain time in the past?" and, if it has, "What are these changes?". Due to the very nature of
regression testing (i.e. because it measures regression) it cannot exist without a system test facility. In fact,
regression testing can be thought of as comparing the results of two system test runs. This is why the preparation
and running of regression tests is built on top of all the steps necessary to execute a standard system test. Only one
additional parameter needs to be set up - the reference to the specimen log file containing the results of a prior test run.
The execution of a regression test run always consists of three parts:
- Executing all required test cases against any number of enterprise systems, based on the test scenario stored
in some directory. As with a system test, this involves checking preconditions before running the tests.
- Saving the resulting test scenario log file. As with a system test, this involves checking
various acceptance conditions and marking tests off as failed or succeeded.
- Comparing the content of the resulting log with the specimen log file and reporting the differences.
Please note that it is imperative that a regression test run starts from a known database state.
Whereas it is theoretically possible (albeit with more work) to prepare a set of system tests
which can run irrespective of the database state, with regression testing this is not possible.
This is because the regression test compares 100% of the data contained in the specimen file with the
data contained in the test scenario log, so even small and innocent differences in the data will lead to
regression test failure. The suggested approach here is to clean up and reinitialise the database prior to
every test run and then have a "Populate Data" test case which runs first, as sketched below.
The advantage of this approach is that the data population services themselves are also tested every time.
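As a purely hypothetical illustration (the element and attribute names below are invented for this
sketch and are not the actual MetaBoss scenario schema), such a scenario could be organised along
these lines:

    <!-- Hypothetical scenario layout; names are illustrative assumptions only. -->
    <TestScenario name="CustomerServicesRegression">
        <!-- Runs first: wipes and repopulates the database, so every run starts
             from the same known state and the population services are exercised. -->
        <TestCase name="PopulateData"/>
        <!-- Functional test cases follow, relying on the state created above. -->
        <TestCase name="CreateCustomer"/>
        <TestCase name="QueryCustomerList"/>
    </TestScenario>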
Specimen file description
As mentioned above, the specimen file is a file which contains the historical
log record of a trusted test run. A trusted test run is simply some successful previous run of the
same system test scenario. In order not to raise false alarms when comparing
volatile elements, the regression test comparison has the following features:
- When checking for the presence of child elements in a parent, the comparison may use the "preciseMatch" policy or
the "unorderedMatch" policy (set as a special attribute of the element in the specimen; see the sketch after
this list). The default is "preciseMatch", which means that the order and contents of all child elements in the
specimen must match the sample. "unorderedMatch" means that only the content must match and the order is ignored.
This feature makes it possible to ignore the order of elements in unordered collections (i.e. ignore the order
where it is not important).
- If the comparison detects a mismatch when comparing strings in the specimen and in the log, it will try to
use the string in the specimen as a regular expression and check whether the string in the log matches it.
This allows one to verify the format of volatile fields while ignoring their content.
- Comparison takes place in one direction only, that is, from the specimen file to the log file.
In other words, it ensures that every bit of data (all elements, attributes, namespaces, etc.) present
in the specimen file is also present in the test log file, but not the other way around. Note that this is, in fact, true
to the spirit of regression testing - we are interested in whether any "old" service has changed its behaviour and
may potentially cause client systems to fail (new features are not subject to regression testing simply because they
are new!).
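To make these features concrete, below is a sketch of a specimen fragment. The element names are
invented for illustration, and the "matchPolicy" attribute name is an assumption - the guide only
states that the match policy is set as a special attribute of the specimen element:

    <OperationOutput>
        <!-- Volatile field: the captured value has been replaced by a regular
             expression, so only the format of the field is verified. -->
        <TransactionId>TXN-\d{8}</TransactionId>
        <!-- Unordered collection: child element order in the log is ignored;
             only the content of the children must match. -->
        <Customers matchPolicy="unorderedMatch">
            <Customer name="Smith"/>
            <Customer name="Jones"/>
        </Customers>
    </OperationOutput>

With this specimen, a log which lists Jones before Smith still passes, and any transaction id
consisting of "TXN-" followed by eight digits is accepted.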
Based on these features of the comparison process, to convert the log file from a trusted test run into a specimen
file one has to do the following:
- Identify all volatile fields and either remove them or replace them with regular expressions.
- Identify all element collections where order is not important and mark them with the special attribute.
Note that it is not necessary to complete these steps upfront. They can be done incrementally in response to
regression test failures. More formally (and much more effectively), the transformation of the trusted log file
into a specimen file should be done with the use of XSLT stylesheets. In this case the XSLT stylesheet can be
seen as a formal description of the allowable variations between test runs.
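For instance, a minimal XSLT 1.0 sketch of such a stylesheet might copy the trusted log verbatim and
override only the volatile or unordered parts. The <Timestamp> and <Customers> element names and the
"matchPolicy" attribute below are illustrative assumptions:

    <?xml version="1.0"?>
    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <!-- Identity template: copy every node and attribute as-is. -->
        <xsl:template match="@*|node()">
            <xsl:copy>
                <xsl:apply-templates select="@*|node()"/>
            </xsl:copy>
        </xsl:template>
        <!-- Volatile field: keep the element but replace its value with a
             regular expression which only checks the format. -->
        <xsl:template match="Timestamp">
            <Timestamp>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}</Timestamp>
        </xsl:template>
        <!-- Unordered collection: mark it so that the comparison ignores
             the order of its children. -->
        <xsl:template match="Customers">
            <Customers matchPolicy="unorderedMatch">
                <xsl:apply-templates select="@*|node()"/>
            </Customers>
        </xsl:template>
    </xsl:stylesheet>

Each template override is then a small, reviewable, formal statement of one allowable variation
between test runs.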
The reliability and quality of the regression test depend very much on the
quality of the specimen file, and we suggest that the specimen file candidate be
reviewed by testers, developers and designers prior to promoting it to the final specimen.
Questions which should be answered by this review are:
- Does the underlying test scenario (and therefore the specimen log) cover enough test cases for it to be
a comprehensive regression test?
- Does the specimen log contain valid results? Particularly close attention should be paid to fields
which are not verified during test case acceptance. These are normally "not so important" operation output fields
which are left out of the automatic test acceptance procedure (e.g. due to a lack of time to program a comprehensive
acceptance procedure). Increased attention during reviews is required because, prior to the review, no one really
knows whether these fields have the expected values.
- Have all volatile fields been removed from the specimen, or their values replaced by regular expressions?
Any field whose value may change with time must either be removed or have its value replaced by
a regular expression in the specimen log. If this is not done, the regression test will detect pseudo
failures and fail. A minimal before/after illustration follows.
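For example (the element name is assumed for illustration), a timestamp field would be converted as
follows:

    <!-- In the trusted log (volatile value as captured): -->
    <ProcessedAt>2004-07-15T10:31:05</ProcessedAt>
    <!-- In the specimen (only the format is checked): -->
    <ProcessedAt>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}</ProcessedAt>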