Thursday, March 8, 2007

Component Testing

What does it mean when someone says component testing? The short answer is: it is anything you want it to be. It is the component testing perspective that is important, not the size of the pieces being tested. That perspective views the software being tested as intended for integration with other pieces rather than as a complete system in itself. This helps to determine both what features of the software are tested and how they are tested. One of the most intense arguments in testing object-oriented systems is whether detailed component testing is worth the effort. That leads me to state an obvious axiom: select a component for testing when the penalty for the component not working is greater than the effort required to test it.

Which ones should we test?
There are several situations in which the individual classes should be tested regardless of their size or complexity:
Reusable components - Components intended for reuse should be tested over a wider range of values than a component intended for a single focused use.
Domain components - Components that represent significant domain concepts should be tested both for correctness and for the faithfulness of the representation.
Commercial components - Components that will be sold as individual products should be tested not only as reusable components but also as potential sources of liability.
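To make the "wider range of values" point concrete, here is a minimal sketch of exercising a reusable component at its boundaries as well as under heavier use than any single client would produce. The Stack class and its interface are hypothetical, introduced only for illustration:

```python
# Sketch: a reusable component should be exercised over a wider range
# of inputs than a single-use component. The Stack class below is a
# hypothetical example component, not from the column.

class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def size(self):
        return len(self._items)

def test_stack_over_wide_range():
    s = Stack()
    # Boundary case: popping an empty stack must fail cleanly.
    try:
        s.pop()
        assert False, "expected IndexError"
    except IndexError:
        pass
    # Stress case: a single-use client might push a handful of items;
    # a reusable component should also handle many.
    for i in range(10_000):
        s.push(i)
    assert s.size() == 10_000
    assert s.pop() == 9_999

test_stack_over_wide_range()
print("ok")
```

A test suite for a component intended for a single, focused use might reasonably stop after the typical cases; the reuse criterion is what justifies the extra boundary and stress cases here.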


How thoroughly do we test?
Before answering the question of how thoroughly to test, let me summarize some of what was presented in last month's column [McGregor, 1997b]. Risk analysis was applied to the task of identifying which parts of the system to test more intensely than the rest. The requirements were analyzed to determine potential business and technical risks for the development process. This analysis then mapped the risks identified at the requirements level onto individual use cases, and each use case was assigned a risk classification. All of the use cases within a category were tested to the same level of coverage. This same technique can be applied at the component level: the risk classification of the use cases can be mapped onto the components. Thus not all components will be tested to the same coverage level, just as not all use cases were tested to the same level.
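The classification-to-coverage mapping can be sketched as a simple table lookup. The risk category names and coverage thresholds below are illustrative assumptions, not values from the original column:

```python
# Sketch: map use-case risk classifications onto target coverage
# levels. The categories and thresholds are assumed for illustration.

RISK_TO_COVERAGE = {
    "high": 0.95,    # e.g. a branch-coverage target for high-risk use cases
    "medium": 0.80,
    "low": 0.60,
}

def coverage_target(use_case_risk: str) -> float:
    """Return the coverage level to which a use case should be tested."""
    return RISK_TO_COVERAGE[use_case_risk]

# Two hypothetical use cases from an ATM-style system:
use_cases = {"Withdraw Cash": "high", "Print Receipt": "low"}
targets = {name: coverage_target(risk) for name, risk in use_cases.items()}
print(targets)  # {'Withdraw Cash': 0.95, 'Print Receipt': 0.6}
```

The point of the table is that coverage effort is budgeted per risk category, not uniformly across the system.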
I have already mentioned one criterion that could be used in a component-level risk analysis: whether the component is intended for reuse. The increased risk comes from the expectation that the component must respond correctly to a much wider range of inputs. Other criteria include the language features required to implement the component (using a relatively new feature such as exceptions in C++ is a higher risk), the complexity of the specification, and the maturity of the development environment, including the tools and the personnel.
The technique for identifying use cases that should be tested more thoroughly can be applied to the components that represent the domain concepts manipulated in those use cases. Once the use cases have been classified according to risk, the domain components referenced in each use case can be assigned the same risk classification as the use case. Of course, it is seldom that simple: since a domain object may participate in more than one use case, the risk categories of all the use cases that reference that component must be combined to compute the component's risk value.
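One plausible combination rule, sketched below, is to treat a component as being as risky as the riskiest use case it participates in. The column does not prescribe a specific formula, so the "take the maximum" rule and the category names here are assumptions:

```python
# Sketch: combine the risk classifications of every use case that
# references a component into one risk value for the component.
# Taking the maximum is one plausible rule, assumed for illustration.

RISK_ORDER = {"low": 0, "medium": 1, "high": 2}

def component_risk(use_case_risks):
    """A component is as risky as the riskiest use case that uses it."""
    return max(use_case_risks, key=lambda r: RISK_ORDER[r])

# A hypothetical Account component referenced by three use cases:
print(component_risk(["low", "high", "medium"]))  # high
```

Other rules are conceivable (for example, weighting by how central the component is to each use case), but the maximum captures the conservative intuition that one high-risk usage is enough to demand thorough testing.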