Institute for Geoinformatics
University of Muenster
Andrew U. Frank
Dept. of Geoinformation
Technical University Vienna
The problem with formal specifications is that formal tools can only check that the specifications are internally consistent. No method is conceivable to automatically check that the specification expresses the intentions of the (collective) mind of the specification body, i.e., that it is correct. However, humans can easily detect if the result of an operation corresponds to their expectations. The idea of this paper is to show how this human capability can be exploited in the writing and testing of specifications for interoperable GIS.
A problem related to that of deciding the correctness of specifications is that of checking the compliance of an implementation with a specification. It is practically (though not fundamentally) impossible to prove the compliance of a program with a specification, but one can test (with a limited set of examples) if an implementation produces no results prohibited by the specification.
The same solution applies to both cases: if specifications are written
in an executable language, they can be run and the results of specific
test cases can be directly inspected. The results can be used to demonstrate
and better understand what the specification means, and to check compliance
of implementations. The paper applies this idea to the specification of
semantics for interoperable components. It shows how the use of a functional
language for specification writing (as explained in the companion paper,
Frank and Kuhn) achieves the goal of unifying the specification and the testing process.
Today’s specifications formally describe the signatures of operations, assuming an implicit type system. The semantics of operations are typically expressed using first-order logic or natural language. Such expressions often assume the semantics of analytical geometry, which are not supported by straightforward implementations. They are difficult to combine and to understand in their collective meaning. It is not unusual that there is no implementation (i.e., no model) satisfying all of them. Operations from analytical geometry (e.g., line intersection or point equality) are notorious examples.
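To see why point equality is notorious, consider the axiom of analytical geometry that the intersection point of two lines lies exactly on both lines. The following sketch (with illustrative names and example coefficients, not taken from any particular specification) shows that a straightforward implementation over floating-point numbers need not satisfy it:

```haskell
-- A line given by the equation a*x + b*y = c, with Double coefficients.
data Line = Line Double Double Double

-- Intersection by Cramer's rule (assumes the lines are not parallel).
intersect :: Line -> Line -> (Double, Double)
intersect (Line a1 b1 c1) (Line a2 b2 c2) =
  let d = a1 * b2 - a2 * b1
  in ((c1 * b2 - c2 * b1) / d, (a1 * c2 - a2 * c1) / d)

-- The analytical axiom: the intersection point lies exactly on the line.
onLine :: Line -> (Double, Double) -> Bool
onLine (Line a b c) (x, y) = a * x + b * y == c

main :: IO ()
main = do
  let l1 = Line 0.1 0.2 0.3
      l2 = Line 0.4 0.1 0.2
      p  = intersect l1 l2
  -- With exact rational arithmetic both tests would be True; with
  -- Double rounding they can fail, so the axiom has no model here.
  print (onLine l1 p, onLine l2 p)
```

The exact intersection point is (1/7, 10/7); whether the exact-equality test succeeds depends on rounding, which is precisely the difficulty the paper points to.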
Faced with these difficulties, current specification practice follows a three-step process:
Proof theory is the appropriate theoretical framework to discuss specifications and to improve the process:
If specifications can be executed, they can serve as prototypes for an implementation. This has tremendous advantages for the specification process: it moves the testing phase forward into the specification writing phase and allows errors to be recognized and rectified early.
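As a minimal sketch of an executable specification serving as a prototype, consider a hypothetical bounding-box algebra (the type and operation names are illustrative, not from any standard):

```haskell
-- A minimal executable specification of a bounding-box algebra.
data BBox = BBox { xmin, ymin, xmax, ymax :: Double } deriving (Eq, Show)

-- Union of two boxes, specified as the smallest box enclosing both;
-- the definition is directly executable as a rewrite rule.
bbUnion :: BBox -> BBox -> BBox
bbUnion a b = BBox (min (xmin a) (xmin b)) (min (ymin a) (ymin b))
                   (max (xmax a) (xmax b)) (max (ymax a) (ymax b))

-- Containment of one box in another.
bbContains :: BBox -> BBox -> Bool
bbContains a b = xmin a <= xmin b && ymin a <= ymin b
              && xmax b <= xmax a && ymax b <= ymax a

main :: IO ()
main = do
  let a = BBox 0 0 2 2
      b = BBox 1 1 3 3
  -- Running a test case lets the specifier inspect the result directly.
  print (bbUnion a b)
  print (bbUnion a b `bbContains` a)
```

Running such a specification on test cases is exactly the early error recognition described above: if the printed results contradict the specifier's intuition, the specification is revised before any implementation exists.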
Writing specifications in an executable language, however, automatically restricts them to constructive expressions (of the kind of rewrite rules). This is a serious restriction, but it can be circumvented by expressing what cannot be stated in the theory as (constructive) test functions and by supplementing the theory with a simplistic execution model. This model is not optimized for efficient execution, but it can demonstrate that an implementation is possible and support human understanding of what is going on in the specification.
Executable specifications make it possible to compute results for any input data. They allow the specifier to observe whether her intuition about what the specification should say corresponds to what the specification actually says. The specification can easily be distributed, so that interested parties can submit test cases and convince themselves that the specification matches the collective understanding of the specification group.
Outcomes for test cases can be computed with an executable specification. These results can then be compared with the output of the implementation under test for the same test cases. An automated compliance test system can be designed on this basis.
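Such a compliance check can be sketched as follows, assuming a specification function and an implementation under test with the same signature (both names, and the distance example, are illustrative):

```haskell
-- Executable specification of planar distance.
specDistance :: (Double, Double) -> (Double, Double) -> Double
specDistance (x1, y1) (x2, y2) = sqrt ((x2 - x1)^2 + (y2 - y1)^2)

-- A candidate implementation to be tested against the specification.
implDistance :: (Double, Double) -> (Double, Double) -> Double
implDistance (x1, y1) (x2, y2) = sqrt (dx * dx + dy * dy)
  where dx = x2 - x1; dy = y2 - y1

-- Automated compliance test: run both on the same cases and compare,
-- up to a declared tolerance.
complies :: [((Double, Double), (Double, Double))] -> Bool
complies cases =
  and [ abs (specDistance p q - implDistance p q) < 1e-9 | (p, q) <- cases ]

main :: IO ()
main = print (complies [((0,0),(3,4)), ((1,1),(1,1)), ((-2,0),(2,0))])
```

As the paper notes, such a test can only show that the implementation produces no results prohibited by the specification on the cases tried; it is not a proof of compliance.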
If one has a hypothesis about how a ‘wrong’ implementation might differ from the specification, it is possible to produce specific test cases for which the outcome of that implementation would be detectably different.
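For example, if one suspects that an implementation compares points with exact equality instead of a specified tolerance, a test case can be constructed where the two behaviours differ. A sketch under that assumption (names and the tolerance value are illustrative):

```haskell
-- Tolerance assumed by the specification (illustrative value).
eps :: Double
eps = 1e-9

-- Specified point equality: within tolerance.
specEq :: (Double, Double) -> (Double, Double) -> Bool
specEq (x1, y1) (x2, y2) = abs (x1 - x2) < eps && abs (y1 - y2) < eps

-- Hypothesized 'wrong' implementation: exact comparison.
wrongEq :: (Double, Double) -> (Double, Double) -> Bool
wrongEq p q = p == q

main :: IO ()
main = do
  -- 0.1 + 0.2 is not exactly 0.3 in Double arithmetic, so this test
  -- case makes the hypothesized wrong implementation detectably differ.
  let p = (0.1 + 0.2, 0.0)
      q = (0.3, 0.0)
  print (specEq p q, wrongEq p q)
```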
Jones, M. P. (1991). An Introduction to Gofer, Yale University.
OGC (1997). A compliance testing strategy. The Open GIS Consortium, Document 97-007. http://www.opengis.org.
Peterson, J., K. Hammond, et al. (1996). Report on the functional programming language Haskell, Version 1.3. http://haskell.cs.yale.edu/haskell-report/haskell-report.html