
MetaBoss Beginner's Guide

Description of the testing process

Overview

In systems built in accordance with Service Oriented Architecture concepts (and our Course Registration System is a fine example of such a system), the testing process essentially comes down to making a series of service calls against a system in a known state and comparing the actual results with the expected ones. From this definition we can easily see what we need to do:

  1. Bring the system into a known state
  2. Plan and execute a series of service calls
  3. Record the results, analyse them and compare them with the expected ones

In the MetaBoss testing methodology, the test procedure is run via Apache Ant, and therefore the master file controlling the execution of the test procedure is an Ant build file. Ant is a very flexible and feature-rich batch processor. Some tasks we use during testing are offered by Ant out-of-the-box. For example, the sql task allows SQL statements to be executed from within an Ant script. We use it to automatically initialise the databases prior to test runs. In our example the test master file is located at ${metaboss_home}/examples/AlmaMater/Test/test.xml.
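
To give a feel for the overall structure, here is a hypothetical skeleton of such a master file (the target names are illustrative; the actual test.xml wires together the tasks shown later in this chapter):

<project name="AlmaMaterCRSSystemTest" default="runtest">

  <!-- Load the database connection properties (see 'Running the test' below) -->
  <property file="test.properties"/>

  <target name="runtest">
    <!-- Step 1: bring the system into a known state -->
    <antcall target="clean.database"/>
    <antcall target="initialise.database"/>
    <antcall target="populate.database"/>
    <!-- Steps 2 and 3: execute the test cases and record the results -->
    <antcall target="run.test.scenario"/>
  </target>

  <!-- The individual targets contain the sql and MetaBossScenarioRunner
       tasks described in the sections below -->
</project>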

The framework is described in more detail in the MetaBoss Testing Framework Guide. This chapter explains in detail the steps involved in testing our example using the MetaBoss Testing Framework.

Index

The 'Find Free Teachers' test case description
The 'Find Student Acquaintances' test case description
Bringing the system into the known state
Executing the test cases
Running the test and interpreting the results
A few words on automatic test validation


The 'Find Free Teachers' test case description

We will now have a look at the findFreeTeachers operation from the MiscellaneousQueries service. This operation is a query: it has no inputs and returns the list of Teachers who are neither a Lecturer nor a Supervisor for any Course. The absence of inputs makes this a very simple test, since there are fewer permutations to test for (i.e. we do not need to call the operation many times with different inputs).

In order to test the main functionality of this query, we need to test that it returns all free Teachers and does not return any of the allocated Teachers. To do that we need to call it against a system populated with a dataset containing Teachers with the following characteristics:

  • One or more Teachers who are neither a Supervisor nor a Lecturer for any Course. All of these Teachers should be returned by this operation.

  • One or more Teachers who are Supervisors for one or more Courses, but not a Lecturer for any Course. None of these Teachers should be returned by this operation.

  • One or more Teachers who are Lecturers for one or more Courses, but not a Supervisor for any Course. None of these Teachers should be returned by this operation.

  • One or more Teachers who are Lecturers for one or more Courses as well as Supervisors for one or more Courses. None of these Teachers should be returned by this operation.

The above list of the required features in the dataset is by no means complete. In real life we may also wish to test some more boundary conditions, such as "No free Teachers" or "No allocated Teachers". The dataset would have to be more complex to accommodate all possible scenarios. However, we believe that what we have described is enough for the purpose of this example.
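
To make this concrete, a Teacher dataset fragment covering the four categories might look like the sketch below. The Teacher attribute names here are illustrative assumptions (chosen by analogy with the Student entity shown later in this guide); whether a Teacher is free or allocated is determined by the Course data that references it, not by the Teacher record itself:

<TeacherList>
  <!-- Not referenced by any Course: must be returned by findFreeTeachers -->
  <Teacher>
    <FirstName>Anna</FirstName>
    <FamilyName>White</FamilyName>
  </Teacher>
  <!-- Referenced as a Supervisor only: must not be returned -->
  <Teacher>
    <FirstName>John</FirstName>
    <FamilyName>Green</FamilyName>
  </Teacher>
  <!-- Referenced as a Lecturer only: must not be returned -->
  <Teacher>
    <FirstName>Mary</FirstName>
    <FamilyName>Black</FamilyName>
  </Teacher>
  <!-- Referenced as both a Lecturer and a Supervisor: must not be returned -->
  <Teacher>
    <FirstName>Peter</FirstName>
    <FamilyName>Grey</FamilyName>
  </Teacher>
</TeacherList>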

The 'Find Student Acquaintances' test case description

We will now have a look at the findStudentAcquaintances operation from the MiscellaneousQueries service. This operation is a query: it requires a Student identification as an input and returns a list of Teachers and Students who might be familiar with the given Student. Familiar Teachers are the ones who are Supervisors or Lecturers on one or more of the Courses the Student is enrolled into. Familiar Students are the ones who are enrolled into one or more of the same Courses as the given Student.

This test case is not as simple as the previous one we have looked at. This is in part due to the complexity of the input Student identification field. Students are identified by either the InstanceId field or the StudentNo field of the input StudentKey structure. (Every entity in the system has an autogenerated InstanceId primary key field. In addition, some entities have natural primary key fields. The entity Key structure contains all fields by which an entity can be uniquely identified.) All of this means that we will have to test invocation with valid and invalid InstanceId and StudentNo input fields.
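
In other words, the Student element of the operation input (which is of the StudentKey type) can carry either identification field, as the following illustrative fragments show (the values are taken from the test scripts later in this chapter):

<!-- Identify the Student by the autogenerated primary key ... -->
<Student>
  <InstanceId>1-1-rq</InstanceId>
</Student>

<!-- ... or by the natural primary key -->
<Student>
  <StudentNo>1123</StudentNo>
</Student>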

In order to test the main functionality of this query, we need to test that it returns all associated Teachers and Students and does not return any others. To do that we need to call it against a system populated with a dataset containing Students with the following characteristics:

  • One or more Students who are not enrolled into any Course. No acquaintances should be returned for these Students.

  • One or more Students who are enrolled into Courses where there are no other Students but there is a Lecturer and a Supervisor. All associated Teachers should be returned, but the list of Students must be empty.

  • One or more Students who are enrolled into Courses where there are some other Students and there is a Lecturer and a Supervisor. All associated Students and Teachers should be returned in this operation.

The above list of the required features in the dataset is by no means complete. The dataset would have to be more complex to accommodate all possible scenarios. However, we believe that what we have described is enough for the purpose of this example.

Bringing the system into the known state

The middleware we have created is stateless, which means that it does not keep in-memory state between service invocations (in other words, it does not maintain a client conversation context). This means that in order to bring our system to a known state we only need to initialise the underlying database with a known dataset. The database is reinitialised in three steps:

  • Clean up the database using generated SQL scripts
    <sql  driver="${dbserver.driver.class}"
             url="${domain.url}"
          userid="${domain.userid}"
        password="${domain.password}"
         onerror="continue">
      <classpath>
        <pathelement path="${dbserver.driver.classpath}"/>
      </classpath>
      <fileset dir="${dbscripts_dir}/${dbmodel}">
        <include name="DBScript_AlmaMater_CRS_Courses_DeleteAll.sql"/>
      </fileset>
    </sql>
    
    

    This step uses the standard Ant sql task, which executes SQL scripts against the database. In this case we run the ${metaboss_home}/examples/AlmaMater/Release/dbscripts/mysql/DBScript_AlmaMater_CRS_Courses_DeleteAll.sql script, which was generated by MetaBoss during the build. The script deletes all database objects in the order appropriate for deletion. Note that the 'onerror' attribute is set to continue. This is to cater for cases where objects being deleted are not found in the database. This situation may arise when the script is used to clean a database which was not created for the same version of the same system (e.g. the database is empty or contains a previous version of the same domain, or even some other domain).
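
    Conceptually, the generated deletion script is a series of drop statements issued in reverse dependency order, along these lines (an illustrative sketch only; the actual table names and statements are dictated by the generated database model):

    -- Association and child tables are dropped before their parents
    DROP TABLE StudentCourses;
    DROP TABLE Course;
    DROP TABLE Student;
    DROP TABLE Teacher;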

  • Initialise the database using generated SQL scripts
    <sql  driver="${dbserver.driver.class}"
             url="${domain.url}"
          userid="${domain.userid}"
        password="${domain.password}">
      <classpath>
        <pathelement path="${dbserver.driver.classpath}"/>
      </classpath>
      <fileset dir="${dbscripts_dir}/${dbmodel}">
        <include name="DBScript_AlmaMater_CRS_Courses_CreateFromScratch.sql"/>
      </fileset>
      <fileset dir="${dbscripts_dir}/${dbmodel}">
        <include name="DBScript_AlmaMater_CRS_Courses_DeleteAll.sql"/>
      </fileset>
      <fileset dir="${dbscripts_dir}/${dbmodel}">
        <include name="DBScript_AlmaMater_CRS_Courses_CreateFromScratch.sql"/>
      </fileset>
    </sql>
    
    

    This step too uses the standard Ant sql task. Here we run the ${metaboss_home}/examples/AlmaMater/Release/dbscripts/mysql/DBScript_AlmaMater_CRS_Courses_CreateFromScratch.sql script, which was generated by MetaBoss during the build. The script creates all database objects in the order appropriate for creation. Note that the 'onerror' attribute is not set, which means that a failure to execute the SQL script will abort the whole test procedure (obviously, if we are unable to initialise the database, pretty much all other tests will fail). Also note that we run the creation script, then the deletion script and then the creation one again. Strictly speaking this is not necessary. We do it to test the deletion script (remember that the database cleanup step ignores script errors, so it is not a valid test of the deletion script).
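
    Conversely, the generated creation script issues create statements in forward dependency order, so that referenced tables exist before the tables that reference them (again an illustrative sketch; the real generated script defines the full column sets and constraints):

    -- Parent tables first ...
    CREATE TABLE Teacher ( InstanceId VARCHAR(20) NOT NULL PRIMARY KEY /* , ...attribute columns... */ );
    -- ... then tables that reference them
    CREATE TABLE Course  ( InstanceId VARCHAR(20) NOT NULL PRIMARY KEY,
                           SupervisorInstanceId VARCHAR(20) NOT NULL /* references Teacher */ );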

  • Populate the database using generated maintenance test scripts
    <MetaBossScenarioRunner scenarioname="AlmaMaterCRSSystemTest"
                                runname="populateDatabase"
                           scenariopath="${release_dir}/maintenance/populateall"
                            includepath="${maintenancedata_dir}"
                                 logdir="${testlogs_dir}"
                           classpathref="classpath.inprocess"/>
    
    

    This step uses the MetaBoss Scenario Runner utility, which executes a series of service operations defined in XML scripts. In this case we run all the scripts from the ${metaboss_home}/examples/AlmaMater/Release/maintenance/populateall directory. All of the scripts in this directory are generated by MetaBoss during the build and look something like the one below:

    <TestCasePlan xmlns="http://www.metaboss.com/XMLSchemas/MetaBoss/SdlcTools/SystemTester/1.0"
                  xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                  description="Create all instances of Student entities specified in the external document">
      <ParametersDocument>
        <Include filename="AlmaMater.CRS.Courses.StudentEntityCreationDocument.xml"/>
      </ParametersDocument>
      <Template>
        <xsl:stylesheet xmlns:target="AlmaMater/CRS/CoursesDomainSupport" version="1.0">
          <xsl:template match="/">
            <xsl:for-each select="/TestCaseLog/ParametersDocument/StudentList/Student">
              <TestCasePlan name="Create New Student">
                <OperationPlan>
                  <InputDocument>
                    <DataManagement.CreateStudentInput xmlns="AlmaMater/CRS/CoursesDomainSupport">
                      <ProposedDetails>
                        <xsl:copy-of select="*"/>
                      </ProposedDetails>
                    </DataManagement.CreateStudentInput>
                  </InputDocument>
                </OperationPlan>
              </TestCasePlan>
            </xsl:for-each>
          </xsl:template>
        </xsl:stylesheet>
      </Template>
    </TestCasePlan>
    
    

    These scripts invoke the automatically generated (and automatically modelled) domain administration services. MetaBoss Scenario Runner executes these scripts in ascending alphabetical order. The generator took care of the file names (this is why the file names have numbers in them) to ensure that the parent entities are created before the child ones. Every one of these scripts has an include command in it. This command includes the data file (also XML) which contains the entity data to be inserted into the database. The data files are not generated - they are prepared by hand, based on the templates generated by MetaBoss. The generated templates for the data files are located in the ${metaboss_home}/examples/AlmaMater/Release/maintenance/datatemplates directory and the actual data files are located in the ${metaboss_home}/examples/AlmaMater/Test/maintenancedata directory.

    We will now take a moment and look closer at these data files. There are two kinds of them - Entity creation documents and Association creation documents. Entity creation documents contain a list of entity definitions in XML, similar to the following:

    <StudentList>
      <Student>
        <StudentNo>1123</StudentNo> 
        <FirstName>Robert</FirstName> 
        <SecondName>Alfred</SecondName> 
        <FamilyName>Smith</FamilyName> 
        <DateOfBirth>20/12/1980</DateOfBirth> 
      </Student>
      <Student>
        <StudentNo>1534</StudentNo> 
        <FirstName>Keneth</FirstName> 
        <SecondName/> 
        <FamilyName>Brown</FamilyName> 
        <DateOfBirth>12/04/1980</DateOfBirth> 
      </Student>
    </StudentList>
    
    

    Note how all attribute names are taken straight from the model. The actual attribute values are, of course, dictated by the needs of the Test Case we are preparing the dataset for. For one-to-many and one-to-one associations, the identifier of the associated entity appears as an entity attribute. In the example below the Course entity has a SupervisorInstanceId attribute which points to the associated Teacher entity:

    <CourseList>
      <Course>
        <Code>CR01</Code> 
        <Name>Engineering</Name> 
        <SupervisorInstanceId>3-1-rs</SupervisorInstanceId> 
      </Course>
      <Course>
        <Code>CR02</Code> 
        <Name>Literature</Name> 
        <SupervisorInstanceId>0-1-rs</SupervisorInstanceId> 
      </Course>
    </CourseList>
    
    

    In order for us to populate the SupervisorInstanceId attribute during Course entity creation, the Teacher entity must already exist in the system. This is because the model stipulates that the Course has one and only one Supervisor, hence a Course record cannot exist without a valid Supervisor reference. The entity instance identifiers are generated automatically and cannot be easily predicted. However, the values are not random, and there is a guarantee that if you run the same creation process starting from a freshly initialised database, the instance identifiers will be the same on every run. This is why the process of preparing the test data has a few iterations in it: create the Entity data file for the Teacher entities, run the population process and note the Instance Ids of the created Teachers, then create the Entity data file for the Course entities using those Teacher Instance Ids, run the population process again, and so on.

    The Association creation documents are used to create many-to-many associations between entities. Many-to-many associations cannot be represented by an attribute inside the entity; they have to be created in a separate step, after all entities have been created. The Association creation document only lists the identifiers of the parent Entity and all associated child Entities, as follows:

    <StudentCoursesList>
      <StudentCourses>
        <StudentKey>
          <InstanceId>0-1-rq</InstanceId> 
        </StudentKey>
        <CourseKeys>
          <CourseKeys.Element>
            <InstanceId>0-1-rr</InstanceId> 
          </CourseKeys.Element>
          <CourseKeys.Element>
            <InstanceId>1-1-rr</InstanceId> 
          </CourseKeys.Element>
        </CourseKeys>
      </StudentCourses>
      <StudentCourses>
        <StudentKey>
          <InstanceId>1-1-rq</InstanceId> 
        </StudentKey>
        <CourseKeys>
          <CourseKeys.Element>
            <InstanceId>1-1-rr</InstanceId> 
          </CourseKeys.Element>
          <CourseKeys.Element>
            <InstanceId>2-1-rr</InstanceId>
          </CourseKeys.Element>
        </CourseKeys>
      </StudentCourses>
    </StudentCoursesList>
    
    

    Similarly to one-to-one and one-to-many associations, the entities in the above example are identified by their autogenerated identifiers. The values for these identifiers must be obtained from the preceding Entity creation iteration.

Executing the test cases

After the system is brought to a known state, it is time to execute the test cases. Like the dataset population step, this one uses the MetaBoss Scenario Runner utility, which executes a series of service operations defined in XML scripts. The Ant task which performs the test looks like this:

<MetaBossScenarioRunner scenarioname="AlmaMaterCRSSystemTest"
                             runname="testQueries"
                        scenariopath="${testscenario_dir}"
                              logdir="${testlogs_dir}"
                        classpathref="classpath.inprocess"/>

In this instance we run all the scripts from the ${metaboss_home}/examples/AlmaMater/Test/testscenario directory. Again, the ScenarioRunner will execute all test case script files found in this directory in ascending alphabetical order. We have found that the easiest way to control the order of execution is to use a numeric prefix in the test case file names.
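
For example, the scenario directory might contain files named like this (hypothetical names):

010_TestFindFreeTeachers.xml
020_TestFindStudentAcquaintances.xml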

We will now take a moment and look closer inside these scripts. As mentioned earlier, the scripts are XML files created in accordance with the MetaBoss ScenarioRunner XML schema, in conjunction with the automatically generated servicemodule schemas. You can learn more about the ScenarioRunner XML schema in the MetaBoss Testing Framework Guide. The servicemodule schemas are generated as part of the domelements adapter source. Since we have two servicemodules in our system, we have two schemas. The CoursesDomainSupport servicemodule schema is located in com/almamater/crs/adapters/coursesdomainsupport/generic/domelements/CoursesDomainSupport.xsd and the Reporting servicemodule schema is located in com/almamater/crs/adapters/reporting/generic/domelements/Reporting.xsd. The fact that ScenarioRunner validates the test cases against the schemas generated from the model means that any non-backwards-compatible change to the model (and therefore to the schemas) will make the affected test cases invalid, and they will have to be fixed. We have found that this produces a good 'early warning' signal and forces designers and developers to ensure backwards compatibility or pay the price for breaking it.

All of the scripts in this directory are prepared by hand and look similar to the one below:

<TestCasePlan xmlns="http://www.metaboss.com/XMLSchemas/MetaBoss/SdlcTools/SystemTester/1.0"
              xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
              description="Test find free teachers service call from the Crs / Reporting functionality">
  <TestCasePlan name="FindStudentAcquaintancesWithStudentNumber"
        description="Executes the service with student number as the unique student identifier">
    <OperationPlan>
      <InputDocument>
        <MiscellaneousQueries.FindStudentAcquaintancesInput xmlns="AlmaMater/CRS/Reporting">
          <Student>
            <StudentNo xmlns="AlmaMater/CRS/CoursesDomainSupport">1123</StudentNo>
          </Student>
        </MiscellaneousQueries.FindStudentAcquaintancesInput>
      </InputDocument>
    </OperationPlan>
  </TestCasePlan>
  <TestCasePlan name="FindStudentAcquaintancesWithStudentInstanceId"
        description="Executes the service with instance id as the unique student identifier">
    <OperationPlan>
      <InputDocument>
        <MiscellaneousQueries.FindStudentAcquaintancesInput xmlns="AlmaMater/CRS/Reporting">
          <Student>
            <InstanceId xmlns="AlmaMater/CRS/CoursesDomainSupport">1-1-rq</InstanceId>
          </Student>
        </MiscellaneousQueries.FindStudentAcquaintancesInput>
      </InputDocument>
    </OperationPlan>
  </TestCasePlan>
</TestCasePlan>

The above script executes the findStudentAcquaintances operation twice. The first invocation uses the StudentNo attribute to identify the Student we are interested in (the StudentNo attribute is declared as a natural primary key in the model and can therefore be used to identify instances). The second invocation uses the entity Instance Id to identify the Student. Note how the whole document is declared as being governed by the ScenarioRunner schema, while some elements inside the document are declared as instances of the elements defined in the CoursesDomainSupport or Reporting schemas.

Running the test and interpreting the results

The directory ${metaboss_home}/examples/AlmaMater/Test contains the runtest.bat Windows batch file, which invokes Ant and runs the test procedure described in this chapter. The files test.properties and domainconfig.xml in the same directory contain the database connection information. If you wish to run these tests yourself, you will have to modify the database server details (computer name and port number) to point to your own MySQL installation. If your database name is not 'almamater_crs_courses', it will also have to be modified. (Note that the database itself must be created manually - the Ant script does not contain a task to do that.)

Why do we have two files with practically the same information? The test.properties file defines Ant properties. This file is included in the Ant script, and the properties are used when the 'sql' task is invoked (the task used to clean up and initialise the database). The domainconfig.xml file is used by the Tyrex transaction management framework. Tyrex is used by MetaBoss systems when they are deployed 'inprocess' (i.e. outside a J2EE container). It provides a standalone implementation of the JTA (Java Transaction API) mechanism. In other words, test.properties is used to initialise Ant, whereas domainconfig.xml is used to initialise the middleware.
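
For orientation, a test.properties file might look like the following sketch. The property names match those used by the sql task shown earlier; the values are illustrative and must match your own MySQL installation:

# JDBC driver used by the Ant 'sql' task (illustrative values)
dbserver.driver.class=com.mysql.jdbc.Driver
dbserver.driver.classpath=/path/to/mysql-connector-java.jar
# Connection details for the test database
domain.url=jdbc:mysql://localhost:3306/almamater_crs_courses
domain.userid=testuser
domain.password=testpassword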

Upon completion of the test run, two log files are created in the ${metaboss_home}/examples/AlmaMater/Test/testlogs directory. The reason for having two log files is that we have invoked ScenarioRunner twice: the first time to populate the database and the second time to execute the test scenario. Have a look at these log files. They list each operation which was invoked, together with its inputs and outputs. They certainly contain enough information for us to study the results and see whether the operations have functioned correctly (we know the dataset and the inputs, so it should not be a problem to validate the outputs).

A few words on automatic test validation

Automatic test validation is the facility which automates validation of the test cases, so that we do not have to rely entirely on human eyes. Automatic test validation falls outside the scope of this simple example; however, it seems appropriate to mention it here. The MetaBoss Testing Framework offers two facilities for automating test case validation:

  • Acceptance criteria specification

    It is possible to specify acceptance criteria as part of the test case specification. The criteria are specified inside the Test Case definition file (in XML) and can range from very simple (e.g. ensure that an operation output field has the expected value) to quite complex sets of conditions. Scenario Runner automatically verifies all acceptance criteria after every operation invocation and marks the test case as failed or successful. This method is very formal, which is good, but the price is that it increases the effort required to write the Test Case.

  • Regression testing against trusted log file specimen

    After a test scenario run has succeeded and the resulting log file has been inspected and found healthy, it is possible to save this log file and use it as a specimen in future testing. If a specimen log is specified, ScenarioRunner will automatically compare the current log file with the saved specimen log file. Each and every discrepancy will be reported and, if any difference is detected, the test will fail. This facility is somewhat less formal than the previous one, but it is also quicker to set up. The only caveat is that the quality of the specimen file must be high (i.e. ScenarioRunner does not know whether the specimen is right or wrong - you have to ensure yourself that the specimen log is correct).

The good news is that both of these facilities can be used simultaneously. If you wish to learn more about these features, please refer to the more comprehensive HatMaker example.