
Automated Testing

All the important business operations for CST are encapsulated within two APIs, the logging service and the administration service, which may be tested programmatically.

All automated test cases have names beginning with TestCase and are found in the package cst.test. They are subclasses of AbstractCSTTestCase, which creates objects commonly used in test cases, such as user identities and example subjects. The abstract class also obtains references to instances of both the logging and admin services from CSTTestSuite. The test suite class can be configured to return either demonstration or production versions of the services: by setting the USE_DEMO flag in CSTTestSuite, developers can run the entire collection of test cases against either the in-memory or the MySQL implementations of the services.
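The switching behaviour described above can be sketched as follows. Only CSTTestSuite and USE_DEMO appear in the text; the service interface and the two implementation class names here are assumptions made for illustration, not CST's actual types.

```java
// Hypothetical sketch of how CSTTestSuite might hand out either the
// demonstration (in-memory) or production (MySQL) service, driven by
// a single flag. Service and implementation names are invented.
interface LoggingService {
    String describe();
}

class InMemoryLoggingService implements LoggingService {
    public String describe() { return "in-memory"; }
}

class MySQLLoggingService implements LoggingService {
    public String describe() { return "mysql"; }
}

public class CSTTestSuite {
    // Flip this flag to run the whole collection of test cases
    // against either the demonstration or the production services.
    public static final boolean USE_DEMO = true;

    public static LoggingService getLoggingService() {
        return USE_DEMO
            ? new InMemoryLoggingService()
            : new MySQLLoggingService();
    }

    public static void main(String[] args) {
        System.out.println(CSTTestSuite.getLoggingService().describe());
    }
}
```

Because every test case obtains its services through the suite class, no individual test needs to know which implementation it is exercising.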

Most public methods in the TestCase classes each represent a single test. Method names follow this convention:

test[Feature Description][N|A|E][Number]

For example, testDateTypeValuePresentN2 is a method name that follows this convention.
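The convention can be illustrated with a small sketch. Apart from testDateTypeValuePresentN2, the feature names below are invented; only the test[Feature Description][N|A|E][Number] pattern comes from the text. The main method mimics how JUnit 3 discovers tests, namely by reflecting over public methods whose names begin with "test".

```java
import java.lang.reflect.Method;

// Hypothetical test class showing the naming convention. The
// second-to-last character of each name carries the classification:
// N = normal, A = unusual but acceptable, E = error condition.
public class TestCaseNamingDemo {
    public void testDateTypeValuePresentN2() { }  // normal behaviour, case 2
    public void testSubjectLookupA1()        { }  // unusual but acceptable
    public void testUnknownAttributeE1()     { }  // error condition

    // Mimic JUnit 3 discovery: list every method starting with "test"
    // and report its behaviour classification letter.
    public static void main(String[] args) {
        for (Method m : TestCaseNamingDemo.class.getDeclaredMethods()) {
            String name = m.getName();
            if (name.startsWith("test")) {
                char kind = name.charAt(name.length() - 2);
                System.out.println(name + " -> " + kind);
            }
        }
    }
}
```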

In the naming convention, "N" denotes a test for normal behaviour, "A" denotes a test for unusual but acceptable behaviour, and "E" denotes a test for an error condition. Examples of normal behaviour include:

Examples of abnormal behaviour include:

Examples of error behaviour include:

These examples show that classifying behaviours can be subjective. For example, it may be considered normal for end-users to look up a non-existent subject if they are typing in a long unique identifier; the same lookup could instead be considered an error if the tool tracked a small cohort marked by short, simple identifiers.

Another area of subjectivity in creating the test cases is deciding whether a given scenario could happen at all. For example, when users work through the Logging Tool, there is no way they could update an activity for a non-existent subject or filter subjects on a non-existent search attribute value.

However, the test suite was designed in anticipation that the APIs will be used in future by other front ends and software clients whose designs would not prevent such erroneous requests from being submitted. All validation routines therefore exist within the implementations of the APIs, not in the GUI front ends that interact with them.
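The design point above, validation in the API rather than in any particular front end, can be sketched as follows. All class, method, and identifier names here are assumptions made for illustration; they are not CST's actual API.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical service implementation. Because the check lives in the
// API itself, any client (the Logging Tool GUI or some future software
// client) that submits an update for a non-existent subject is rejected.
public class LoggingServiceSketch {
    private final Set<String> knownSubjects = new HashSet<>();

    public void registerSubject(String subjectID) {
        knownSubjects.add(subjectID);
    }

    public void updateActivity(String subjectID, String activity) {
        // Validation happens here, not in the GUI front end.
        if (!knownSubjects.contains(subjectID)) {
            throw new IllegalArgumentException(
                "Unknown subject: " + subjectID);
        }
        // ... record the activity ...
    }

    public static void main(String[] args) {
        LoggingServiceSketch service = new LoggingServiceSketch();
        service.registerSubject("subject-001");
        service.updateActivity("subject-001", "sample logged");
        try {
            service.updateActivity("no-such-subject", "sample logged");
            System.out.println("unexpected: request was accepted");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected by API validation");
        }
    }
}
```

An "E"-classified test case can then exercise this path directly through the API, without needing any GUI at all.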

Each test case has setup and tear-down methods, which ensure that the JUnit test framework can run the tests in any order.
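The lifecycle described above can be sketched without the JUnit library itself: setUp() rebuilds the fixtures before every test and tearDown() discards them afterwards, so no test depends on state left behind by another. The fixture used here is an assumption for illustration, not one of CST's actual test objects.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the JUnit 3 setUp/tearDown pattern. Each test
// sees a freshly built fixture, so the tests can run in any order.
public class LifecycleDemo {
    private List<String> subjects;  // example fixture

    protected void setUp() {
        subjects = new ArrayList<>();
        subjects.add("subject-001");
    }

    protected void tearDown() {
        subjects = null;  // discard state so nothing leaks between tests
    }

    public void testSubjectPresentN1() {
        if (!subjects.contains("subject-001")) throw new AssertionError();
    }

    public void testSubjectCountN1() {
        if (subjects.size() != 1) throw new AssertionError();
    }

    public static void main(String[] args) {
        LifecycleDemo demo = new LifecycleDemo();
        // Run in either order; each test gets a fresh fixture.
        demo.setUp(); demo.testSubjectCountN1();   demo.tearDown();
        demo.setUp(); demo.testSubjectPresentN1(); demo.tearDown();
        System.out.println("all tests passed");
    }
}
```

In JUnit 3 the same effect is obtained by overriding TestCase.setUp() and TestCase.tearDown(), which the framework calls around every test method automatically.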

Author: Kevin Garwood


(c)2010 Medical Research Council. Licensed under Apache 2.0.