Design for Testability

Develop an API to isolate presentation code from code used to support business operations and data persistence

The application uses a common three-layer architecture that organises code into separate layers addressing presentation, business logic and data persistence. Code in the presentation layer is responsible for marshalling form data and form requests into a form that can be understood by the rest of the software. Parts of the user interface make calls to an API whose methods are expressed in terms of concepts defined in the business logic layer. Changes made to the data repository are managed by code in the persistence layer.
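
The following is a minimal sketch of that separation. The names (LogEntry, LogEntryRepository, LogEntryService, LogEntryFormHandler) are hypothetical illustrations, not CST's actual classes.

    // Business layer: a domain concept with no knowledge of forms or tables.
    class LogEntry {
        private final String description;
        LogEntry(String description) { this.description = description; }
        String getDescription() { return description; }
    }

    // Persistence layer: only concerned with storing and retrieving records.
    interface LogEntryRepository {
        void save(LogEntry entry);
    }

    // Business logic layer: operations expressed in terms of domain concepts.
    class LogEntryService {
        private final LogEntryRepository repository;
        LogEntryService(LogEntryRepository repository) { this.repository = repository; }
        void recordEntry(LogEntry entry) { repository.save(entry); }
    }

    // Presentation layer: marshals form input and calls the business API;
    // it never touches the repository directly.
    class LogEntryFormHandler {
        private final LogEntryService service;
        LogEntryFormHandler(LogEntryService service) { this.service = service; }
        void onSubmit(String formText) { service.recordEntry(new LogEntry(formText)); }
    }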

Organising code into well-separated layers promotes testability in two respects: the features of one layer can be tested in isolation from the other layers, and the code in one layer can be substituted with a simpler implementation to help localise faults.

As an example of the first case, consider the classes defined in the business layer. Most have fields and operations that relate to some business domain concept that would be meaningful to an end-user. None of the classes are concerned with the way their instances are stored on disk or with the way they are presented to end-users. They are mainly concerned with validating fields and with detecting differences in field values between two objects. Testing these features requires no code from either the presentation or persistence layers, which limits the possible sources of error in the test suite.
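
Expanding the hypothetical LogEntry sketched above suggests what such a class might look like; field validation and difference detection need nothing outside the business layer.

    import java.util.ArrayList;
    import java.util.List;

    class LogEntry {
        private String title;
        private String description;

        String getTitle() { return title; }
        void setTitle(String title) { this.title = title; }
        String getDescription() { return description; }
        void setDescription(String description) { this.description = description; }

        // Returns human-readable messages for any invalid fields.
        List<String> validateFields() {
            List<String> errorMessages = new ArrayList<>();
            if (title == null || title.trim().isEmpty()) {
                errorMessages.add("The title field cannot be blank.");
            }
            return errorMessages;
        }

        // Detects whether another entry has identical field values.
        boolean hasIdenticalContents(LogEntry other) {
            return fieldsMatch(title, other.title)
                && fieldsMatch(description, other.description);
        }

        private static boolean fieldsMatch(String a, String b) {
            return (a == null) ? (b == null) : a.equals(b);
        }
    }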

Substituting code in different layers can also be useful in test cases. Because code in the presentation layer does not interact directly with code in the persistence layer, the user interface can be made to interact with a data repository implemented as a hard-coded set of records. When presentation features are tested, testers can assume the mock code works because it is much simpler than the code used with a live relational database. If errors occur, developers can reasonably conclude that they originate in either the business or presentation layer. The separation of layers allows developers to use substitution to help pinpoint the location of problems in the code.
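
A sketch of such a hard-coded repository, implementing the hypothetical LogEntryRepository interface from the architecture sketch above:

    import java.util.ArrayList;
    import java.util.List;

    class InMemoryLogEntryRepository implements LogEntryRepository {
        private final List<LogEntry> records = new ArrayList<>();

        @Override
        public void save(LogEntry entry) {
            records.add(entry);
        }

        int recordCount() {
            return records.size();
        }
    }

    // In a presentation test, the mock replaces the database-backed
    // implementation, so any failure must lie above the persistence layer:
    //   LogEntryService service =
    //       new LogEntryService(new InMemoryLogEntryRepository());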

Code features in the persistence and business layers may be exercised without human intervention, using test cases run by testing programs such as JUnit. Although these two architectural layers are well-separated, it is desirable to encapsulate their combined code behind an API that is used by the presentation layer. The presence of the API helps testability in two ways: it narrows the concerns a test suite must address to the operations the API publishes, and it provides a stable set of entry points through which testing programs can exercise the business and persistence code automatically.

The former benefit helps scope the concerns of test suites; the latter helps speed up the testing effort by fostering automated testing.
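
A sketch of an automated JUnit test exercising the business and persistence code together, with no human intervention. The class and method names reuse the hypothetical examples above, and JUnit 4 is assumed.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class LogEntryServiceTest {

        @Test
        public void recordedEntryIsSaved() {
            InMemoryLogEntryRepository repository = new InMemoryLogEntryRepository();
            LogEntryService service = new LogEntryService(repository);

            service.recordEntry(new LogEntry("backup completed"));

            assertEquals(1, repository.recordCount());
        }
    }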

Make the presentation code interact with an API that is expressed as an interface, not as a concrete class.

The API could be expressed as a set of methods made available through a specific class. Such a class can hide its contents from the client user interface code, but it still ties test cases to a particular implementation of the business and persistence code. It is easier to substitute implementations of the API with mock code if the client code assumes it is dealing with an abstract interface rather than a concrete class. The use of an interface also enforces, rather than merely encourages, a separation of concerns between the layers of the architecture. By using a mock implementation of the API, the source of errors in failed tests can be limited to the presentation layer.
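
A sketch of expressing the API as a Java interface; CSTService and its methods are hypothetical names. Because presentation code refers only to the interface type, a mock implementation can be substituted at will.

    interface CSTService {
        void recordEntry(LogEntry entry);
        int countEntries();
    }

    // Mock implementation used when testing presentation features; a failed
    // test cannot then be blamed on the business or persistence layers.
    class MockCSTService implements CSTService {
        private int entryCount = 0;

        @Override
        public void recordEntry(LogEntry entry) {
            entryCount++;   // no validation, no database
        }

        @Override
        public int countEntries() {
            return entryCount;
        }
    }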

Establish exception handling policy

Java allows code to either throw or catch a checked exception. Throwing an exception often simplifies a method by delegating the task of dealing with errors to the calling code. Catching an exception within a method complicates its implementation but simplifies the calling code.
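
A small illustration of the trade-off; both methods are hypothetical examples rather than CST code.

    import java.io.FileReader;
    import java.io.IOException;

    class ExceptionPolicyExample {

        // Throwing: the method stays simple and delegates error handling
        // to the calling code, which must now catch or re-throw.
        static FileReader openConfig(String path) throws IOException {
            return new FileReader(path);
        }

        // Catching: the method grows more complex, but the calling code
        // no longer has to deal with the checked exception.
        static FileReader openConfigOrNull(String path) {
            try {
                return new FileReader(path);
            } catch (IOException e) {
                return null;
            }
        }
    }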

Use custom checked exceptions that include useful messages for people and error codes for testing programs.

CSTException is a checked exception which has two notable features: it carries an informative message intended for people, and it records error codes, together with counting methods that filter by error code, intended for testing programs.

Executing an API call may result in one or more errors. When people encounter errors in the course of using the Logging Tool forms, they may be shown multiple errors at once in an effort to speed up the data submission process. The informative message exists for the benefit of people: it provides an intelligible error message for end-users while hiding auto-generated messages that may contain sensitive information.
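
A minimal sketch of how an exception like CSTException might combine a human-readable message with error codes and counting methods (discussed in the next paragraph); the fields and signatures shown here are assumptions, not CST's actual API.

    import java.util.ArrayList;
    import java.util.List;

    public class CSTException extends Exception {

        private final List<Integer> errorCodes = new ArrayList<>();

        public CSTException(String userFriendlyMessage) {
            super(userFriendlyMessage);   // safe to show to end-users
        }

        public void addErrorCode(int errorCode) {
            errorCodes.add(errorCode);
        }

        // Lets a testing program count the errors produced for a given code.
        public int countErrors(int errorCode) {
            int count = 0;
            for (int code : errorCodes) {
                if (code == errorCode) {
                    count++;
                }
            }
            return count;
        }

        public int countAllErrors() {
            return errorCodes.size();
        }
    }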

The error codes, and the counting methods that filter by error code, allow CST to support detailed automated testing. Without them, a test case could merely report that an error occurred. With them, a test case can determine whether any errors were generated and how many errors were produced for a given error code.

Within API calls, validate all parameters before executing the main business logic

The methods that appear in an API make assumptions about the kinds of values that may appear in their actual parameters. If the client code violates an assumption by passing an illegal parameter value to the API, the outcome of the test case becomes uncertain. In automated test suites, the best outcome in this situation is that the CST service encounters an error, throws an exception and fails its test case. This may obscure the presence of other flawed code, but at least testers are notified that there is a problem. The worst outcome is that the test case passes while the erroneous parameter value causes an error that goes undetected.

By validating all parameter values before a business operation occurs, CST limits the sources of error within the body of code responsible for carrying out the main task of the API method call.
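
A sketch of this pattern; the error codes and method body are hypothetical. Problems are collected rather than thrown one at a time so that, as described earlier, several errors can be reported at once.

    class LogEntryAPIMethod {

        static final int ERROR_NULL_ENTRY = 101;
        static final int ERROR_BLANK_TITLE = 102;

        void recordEntry(LogEntry entry) throws CSTException {
            CSTException exception =
                new CSTException("The log entry could not be saved.");
            if (entry == null) {
                exception.addErrorCode(ERROR_NULL_ENTRY);
            } else if (entry.getTitle() == null
                || entry.getTitle().trim().isEmpty()) {
                exception.addErrorCode(ERROR_BLANK_TITLE);
            }
            if (exception.countAllErrors() > 0) {
                throw exception;   // fail before the business operation begins
            }
            // ...main business logic and persistence calls follow...
        }
    }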

Develop test features which can reset the data repository to a known state.

The most important property of a test case is that it produces repeatable results: given the same inputs, it should generate the same outputs each time it is run. The outcome of the current test case should not be influenced by any test cases that run before or after it. In CST, most test cases assume that the database begins in an empty state. Before the body of each automated test case is executed, the data repository is cleared and shared resources such as database connections are recovered. These actions allow a test case to eliminate other test cases as possible sources of error.
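
A sketch of performing that reset before each test case; JUnit 4 is assumed, and TestDatabaseUtility is a hypothetical helper class rather than part of CST.

    import org.junit.After;
    import org.junit.Before;

    public class LoggingToolTestBase {

        @Before
        public void resetDataRepository() throws Exception {
            TestDatabaseUtility.deleteAllRecords();    // begin from a known, empty state
        }

        @After
        public void recoverSharedResources() throws Exception {
            TestDatabaseUtility.closeConnections();    // recover database connections
        }
    }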

Maintain automated test suites

Several automated test suites have been developed for CST. They can be run against either demonstration or production versions of the data repository. Maintaining the test cases presents an overhead cost for ongoing development. The first time a test suite is run, the effort needed to build and run it may be no less than the effort needed to test the software manually. Most of the cost savings gained through automated testing come not from the first test run but from all subsequent runs applied to later software releases. Although some effort is needed to build test cases for new features, many of the test cases that apply to existing features will already be available.

Author: Kevin Garwood


(c)2010 Medical Research Council. Licensed under Apache 2.0.