Automated Testing

All the important business operations for CST are encapsulated within two APIs that may be tested programmatically.
All automated test cases have names beginning with TestCase and are found in the package cst.test. They are subclasses of AbstractCSTTestCase, which creates common objects used in test cases, such as user identities and example subjects. The abstract class also obtains references to instances of both the logging and admin services from CSTTestSuite. The test suite class can be configured to return either demonstration or production versions of the services: by setting a flag in CSTTestSuite, developers can apply the entire collection of test cases to either the in-memory or the MySQL implementations of the services.
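A minimal sketch of how such a suite-level switch might work is shown below. The class, interface, and method names here (LogService, setUseProductionServices, and so on) are illustrative assumptions, not the actual CST source:

```java
// Illustrative sketch only: names are assumptions, not the real CST API.
// The suite holds a flag and hands out either in-memory (demonstration)
// or MySQL-backed (production) implementations of a service interface.
interface LogService {
    void logActivity(String subjectId, String activity);
}

class InMemoryLogService implements LogService {
    public void logActivity(String subjectId, String activity) {
        // would record the activity in an in-memory structure
    }
}

class MySQLLogService implements LogService {
    public void logActivity(String subjectId, String activity) {
        // would write the activity to a MySQL database via JDBC
    }
}

class CSTTestSuite {
    // When true, tests exercise the MySQL-backed services instead of
    // the in-memory demonstration versions.
    private static boolean useProductionServices = false;

    static void setUseProductionServices(boolean flag) {
        useProductionServices = flag;
    }

    static LogService getLogService() {
        return useProductionServices ? new MySQLLogService()
                                     : new InMemoryLogService();
    }
}
```

Because every test case obtains its services through the suite rather than constructing them directly, flipping one flag re-targets the entire collection of tests.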
Most public methods in the TestCase classes represent the activity of a single test. Method names follow a convention; testDateTypeValuePresentN2 is an example of a method name that follows it.
In the naming convention, "N" denotes a test for normal behaviour, "A" denotes a test for unusual but acceptable behaviour and "E" denotes a test for an error condition. Examples of normal behaviour include:
- search for a subject which exists
- search for a subject that does not exist
- filter subjects based on an attribute value which will yield multiple results
Examples of abnormal behaviour include:
- filter subjects based on an attribute value which will yield one result
Examples of error behaviour include:
- filter subjects based on a non-existent attribute value
- retrieve data for a non-existent activity or activity step
- attempt to update an activity record for a non-existent subject
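The convention can be sketched with a small self-contained example. The service below and its test methods are invented for illustration (SubjectService and its methods are assumptions, not CST code); the point is the N/A/E suffixes in the test method names:

```java
import java.util.*;

// Hypothetical service standing in for the real CST APIs.
class SubjectService {
    private final Map<String, String> subjects = new HashMap<>();
    private final Set<String> validAttributeValues;

    SubjectService(Set<String> validAttributeValues) {
        this.validAttributeValues = validAttributeValues;
    }

    void addSubject(String id, String attributeValue) {
        subjects.put(id, attributeValue);
    }

    boolean subjectExists(String id) {
        return subjects.containsKey(id);
    }

    // Error behaviour: filtering on a value the schema does not define.
    List<String> filterByAttribute(String value) {
        if (!validAttributeValues.contains(value)) {
            throw new IllegalArgumentException("Unknown attribute value: " + value);
        }
        List<String> matches = new ArrayList<>();
        for (Map.Entry<String, String> e : subjects.entrySet()) {
            if (e.getValue().equals(value)) matches.add(e.getKey());
        }
        return matches;
    }
}

// Test methods named after the convention:
// N = normal, A = unusual but acceptable, E = error condition.
class SubjectServiceTests {
    private static SubjectService freshService() {
        SubjectService s =
            new SubjectService(new HashSet<>(Arrays.asList("red", "blue")));
        s.addSubject("subj1", "red");
        s.addSubject("subj2", "red");
        s.addSubject("subj3", "blue");
        return s;
    }

    // N: search for a subject which exists
    static boolean testSubjectExistsN1() {
        return freshService().subjectExists("subj1");
    }

    // N: search for a subject that does not exist
    static boolean testSubjectExistsN2() {
        return !freshService().subjectExists("no-such-subject");
    }

    // A: filter based on an attribute value yielding a single result
    static boolean testFilterByAttributeA1() {
        return freshService().filterByAttribute("blue").size() == 1;
    }

    // E: filter based on a non-existent attribute value
    static boolean testFilterByAttributeE1() {
        try {
            freshService().filterByAttribute("green");
            return false;            // expected an exception
        } catch (IllegalArgumentException expected) {
            return true;
        }
    }
}
```

The trailing digit distinguishes multiple tests of the same kind for one operation, so testSubjectExistsN2 is the second normal-behaviour test of the subject search.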
The examples show that the classification of behaviours can be subjective. For example, searching for a non-existent subject may be considered normal if end-users are typing in long unique identifiers; it could instead be considered an error if the tool tracked a small cohort identified by short, simple identifiers.
Another area of subjectivity in creating the test cases is deciding whether certain scenarios could occur at all. For example, the Logging Tool gives its users no way to update an activity for a non-existent subject or to filter subjects based on a non-existent search attribute value. However, the test suite was designed to anticipate that the APIs would be used in future by other front ends and software clients whose design might not prevent such erroneous requests from being submitted. For this reason, all validation routines exist within the implementation of the APIs, not in the GUI front ends which interact with them.
Each test case has setup and tear-down methods which ensure that the JUnit test framework can run the tests in any order.
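The setup/tear-down pattern can be sketched as follows. The fixture contents and class names are assumptions made for the sketch (AbstractCSTTestCase's real fixtures are not reproduced here), and plain Java stands in for the JUnit base class so the example is self-contained:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the setup/tear-down pattern using JUnit 3-style method
// names; the fixture details are invented for illustration.
abstract class AbstractSketchTestCase {
    // Shared fixtures recreated before every test, so no test depends
    // on state left behind by an earlier one.
    protected Map<String, String> subjects;

    protected void setUp() {
        subjects = new HashMap<>();
        subjects.put("subj1", "red");   // example subject fixture
    }

    protected void tearDown() {
        subjects.clear();               // leave no state for the next test
    }
}

class ExampleTestCase extends AbstractSketchTestCase {
    // In JUnit, the framework brackets each test with setUp() and
    // tearDown(); here we call them explicitly to show the sequence.
    boolean runTestSubjectPresentN1() {
        setUp();
        try {
            return subjects.containsKey("subj1");
        } finally {
            tearDown();
        }
    }
}
```

Because every test starts from a freshly built fixture and cleans up after itself, the framework is free to execute the tests in any order.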
Author: Kevin Garwood