In this discussion I will replace "decision-making" with the term "planning". The idea of planning has strong spatial connotations, and implies a kind of thoughtful premeditation which precludes any abrupt and precipitate rush to conclusions. In government, business, and military affairs, planning is more deliberate and enjoys more careful consideration than decision-making in the field of battle. The closer a decision is to action, the less apt it is to be well served by collaboration.
Collaborative efforts serve a number of purposes. They seek to accommodate different interests by including them in the decision process; this may lead to contention rather than cooperation. They seek through consensus to bind diverse parties (often subordinates) to an agreed conclusion. They seek through collective review to avoid egregious mistakes. Most of the gains which I will consider involve capturing from diverse participants the benefits of wider experience and varied personalities and outlooks. These benefits include improved definitions of goals and their measurement, better knowledge of existing conditions, more correct estimation of the relationships and behaviors which will influence the evolution of the system being planned, and different images of possible future actions and desired arrangements.
The need to define goals, conditions, behaviors, and alternatives thus influences the design of decision support systems. By definition, if the effort is collaborative, the support system cannot already contain satisfactory definitions of all these factors--else there would be no gain from collaboration.
The closest that a support system can approach to replacing the collaboration is in providing a good information base about current conditions, including resources. Even this will be subject to criticism by participants in the collaboration, on grounds of insufficiency, inaccuracy, lack of detail, and so on.
Goals for planning imply purpose, but in general a computer system cannot be purposeful, normative, or prescriptive. If it is, its usefulness is confined to those groups which share the built-in set of purposes. (Optimizing models present the difficulty that the objectives or goals may be hidden, may manifestly be too narrow to correspond with the goals of the users, or may diverge from them.)
Behaviors are poorly represented in pure information systems, and perhaps rightly so. For example, future information about environmental or river-basin conditions is founded only in part on geographic information, but also on physical, chemical, biological, and meteorological relationships, and on assumed social behaviors which impact the future. The geographic information base is more or less factually accurate. The relationships used to predict the future evolution of the system are built into some model, with a certain debatable level of scientific validity. The behaviors of businesses and families which impact the system through their proximate effects are based on other models or mental presumptions which are even more open to debate than the models of natural process.
Alternative ideas about problem solutions and decisions affecting the future are quite clearly beyond the scope of information systems, and may be very difficult to generate through support systems. One established approach is through optimization; however, methods for this do not usually generate alternatives, but single solutions. There are other and more serious difficulties in conventional views of finding alternatives which will appear later.
This pressure of time is exerted in part through the organization of the collaborative planning process. This governs the means by which the collaborators keep in touch with each other, the methods which are used to accept as input the queries and contributions of the participants, and the response of the support system itself. All of these factors affect the design of decision or planning support systems.
If we imagine an intense planning or decision exploration proceeding in a collaborative fashion, I would anticipate that the most frequent interchanges would revolve around a series of "what if" questions--like "what if we did it this way?", "what if you got an entirely different reaction to this investment?", or "what if the national economic environment moves in an entirely different direction?". Looking at questions of this type suggests that we must go beyond simple and possibly superficial manipulations of geographic information, and think about simulating the responses and later development of a system under the stimulus of different decisions, different behaviors, and different socio-economic environments.
We know that many computations in geographic information systems are extremely computationally intensive, and it should be apparent that extensive simulations of large spatially distributed systems are even more so. (Weather prediction is a standard example, and a convincing one.) A recent paper (Hodgson et al., 1995) provides a rare example of the discussion of this computational complexity.
Somewhat indirectly, the authors address the problem that many commercially available GIS systems seem to operate very slowly, and that in discussions of accuracy in geographic data, the computational burden of accuracy is rarely considered. Their examples deal with mapping problems, analysis methods, and a few simulations of natural systems. There is a report (with no diagnosis) of an environmental impact analysis which required eight weeks under ArcInfo on an IBM server.
Reconstructing their implicit argument, we find a number of fairly straightforward considerations. Fine temporal disaggregation implies numerous repetitive simulations. Hourly rather than daily intervals require more steps; there are almost nine thousand hours in a year, and almost 90 thousand seconds in a day. Similarly, fine spatial data is much more voluminous than coarse, in proportion to the square of the ratio of linear dimensions; a digital elevation grid has a thousand times as many cells at 30 m. resolution as at 1 km. Many geographic operations require something like the square of the number of cells; the fine scale grid would require a million times as many operations for geographic interpolation by brute force as the coarser grid, and so also would calculating intervisibility. The second example is not mentioned in the paper; the first and the amelioration of its difficulty are discussed in some detail.
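The arithmetic behind these ratios can be checked directly. The following sketch simply restates the figures in the paragraph above; the resolutions and scaling exponents come from the text, not from any measurement:

```python
# Back-of-envelope scaling arithmetic for the disaggregation argument.
hours_per_year = 365 * 24          # 8,760: "almost nine thousand"
seconds_per_day = 24 * 60 * 60     # 86,400: "almost 90 thousand"

# Spatial: cell counts grow with the square of the linear-resolution ratio.
linear_ratio = 1000 / 30           # a 1 km grid refined to 30 m
cell_ratio = linear_ratio ** 2     # ~1,111: "a thousand times as many cells"

# Operations that scale with the square of the number of cells
# (e.g. brute-force interpolation, intervisibility) therefore grow
# with the fourth power of the linear-resolution ratio.
op_ratio = cell_ratio ** 2         # ~1.23 million: "a million times as many"

print(round(cell_ratio), round(op_ratio))
```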
The effects of the authors' improvements in the process of spatial interpolation are extremely significant. One such problem required 33.6 hours using a brute-force algorithm on a Sparc 5 workstation. Using an IBM SP2 parallel computer, running time on several problems was reduced in proportion to the number of "nodes", or processors engaged, up to ten. (One processor on this machine was from 1.5 to 3 times as fast as the Sparc 5 workstation.) The use of improved algorithms was even more significant. ArcInfo uses an algorithm which is twice as fast as brute force, but the authors devised a new one which is another thousand times faster. (The discussion does not scale this gain in relation to the problem size.) The combined effect of these approaches was to reduce running time to 4.4 seconds on the IBM machine, an improvement by a remarkable factor of about 30,000.
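As a rough check on the reported figure, the overall factor follows from simple arithmetic on the two running times quoted above:

```python
# The combined speedup reported by Hodgson et al., restated as arithmetic.
brute_force_seconds = 33.6 * 3600      # 33.6 hours on the Sparc 5
final_seconds = 4.4                    # on the IBM SP2 with the new algorithm

overall = brute_force_seconds / final_seconds
print(round(overall))                  # ~27,500, i.e. "about 30,000"

# The factor decomposes roughly into: ~10x from ten parallel nodes,
# ~1.5-3x per node over the Sparc 5, ~2x for the ArcInfo algorithm,
# and a further ~1000x for the authors' new algorithm.
```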
Some of these lessons can be extended to planning support systems which use simulations, and a review of the sources of computational complexity in these simulations will reveal some of the possibilities of additional future research to ameliorate them. This, then, is the topic of the next, and the main, part of this argument.
Urban systems are the complexes in which we see the life and interaction of their populations (people, households, social organizations and businesses) with each other and with the natural environment, most often through the means of the man-made environment (including buildings for homes, workplaces, and other purposes, means of travel and communication, and amenities and services like parks, schools, and fire protection). These land uses and interactions are governed and facilitated by customs, laws, private regulations and agreements, and public taxes and disbursements. For purposes of assessing the outcomes of possible decisions, public and private planners want to simulate the hypothetical impacts of these arrangements, as they are modified and evolve, on the conditions of living, working, learning, and relaxing, and on the interactions which the activities require and engender.
Like the two examples of geographic transformations given above, these interactions in the urban system do not depend on contiguity or narrowly defined proximity. For example, we need to explore, under varying conditions, the transformation of geographic distributions of households, workers, and income at home into new distributions of employees at work-places, students at schools, and shoppers at stores. These and other interactions are not independent of each other, and models are required which jointly consider all of them. Transport networks need to be specified and the interactions need to respond to the state of the transport system, which in turn responds to the demands made upon it.
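One conventional family of models for transformations of this kind is the spatial interaction, or gravity, model. The sketch below is a minimal illustration with invented zone totals and travel costs, not a reconstruction of any particular model discussed here; it shows how home-end totals are redistributed to work-end totals in response to travel cost:

```python
import numpy as np

# A doubly-constrained gravity-model sketch: flows T[i,j] from home
# zone i to work zone j are proportional to resident workers O[i],
# jobs D[j], and a decaying function of travel cost c[i,j].
# All numbers below are invented for illustration.
O = np.array([400.0, 300.0, 300.0])        # workers resident in each zone
D = np.array([500.0, 250.0, 250.0])        # jobs in each zone
c = np.array([[1.0, 2.0, 3.0],
              [2.0, 1.0, 2.0],
              [3.0, 2.0, 1.0]])            # travel costs between zones
beta = 0.5                                  # cost-sensitivity parameter

T = np.outer(O, D) * np.exp(-beta * c)      # unconstrained seed matrix
# Iterative proportional fitting: scale rows to O, columns to D.
for _ in range(50):
    T *= (O / T.sum(axis=1))[:, None]
    T *= (D / T.sum(axis=0))[None, :]

print(np.round(T, 1))
```

Note that an interaction matrix like T has as many entries as the square of the number of zones, which is exactly the scaling burden discussed earlier.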
All of this has several elementary impacts on the relation of these simulations to geographic information, and thus also to GIS. The amount of geographic information required is very large, and requires systematic storage. Much of the information is collected on the basis of census units, and can only in very few cases be easily related to the kind of raster system which is used for studying natural phenomena; the use of vector systems, with their burdens of computational load and data structure, predominates. High-speed computations will be essential in any event, and this implies that such simulation systems can at best be called by GIS, but need their own data structures and independence for uninterrupted operation.
These ideas can be extended slightly. Much urban data is located with respect to governmentally determined units, and units defined by improvements or geographic features such as roads, watercourses, and ridge lines. Such units include blocks, zip code areas, and civil jurisdictions. Streets, highways, rail lines, and structures are relatively permanent. The use of simulation and information jointly by many agencies, organizations, and the public is an ongoing process which requires continuity. All this implies that the simulation process requires a fixed area system, perhaps hierarchically disaggregated. This system should be organized and if necessary modified with the use of a GIS, which should update its data periodically, and which should be able to map and otherwise present the intermediate and final results of simulations.
Now as to the computations themselves: here the same conclusions apply as in purely geographic computations. Finer disaggregation should produce better results at higher computational cost. There are, however, limits to this fineness, and some doubts as to its ultimate desirability. Some disaggregations are barred for reasons of privacy. Others may be observed and reported, but not easily predicted for the future. For instance, some impact of religious affiliation on locational behavior may exist, but its use is ruled out (except in very special studies) by both of these considerations. Very fine disaggregation by areas may introduce data for which errors have not been reduced by the law of large numbers.
There is the possibility of further disaggregation by subsystems within the urban system. Such subsystems are the labor market, the housing market, the land market (which includes vacant land taken up for housing), the transport system, and many utilities and services. All of these systems influence each other by a variety of mechanisms, and these influences then feed back into their own functions. Intensifying or abating the calculation of interactions among functional systems can influence the scale of computation by a factor of up to a hundred. It also influences the possibilities for parallel computing.
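The factor-of-a-hundred figure is consistent with simple counting: if roughly ten mutually responsive subsystems each feed back into every other, the number of directed couplings approaches the square of the number of subsystems. The count of ten in this one-line illustration is an assumption for the example, not a figure from the text:

```python
# k functional subsystems (labor, housing, land, transport, utilities, ...)
# each responding to every other gives k*(k-1) directed couplings,
# versus a baseline of k isolated calculations.
k = 10
directed_couplings = k * (k - 1)
print(directed_couplings)  # 90, approaching the factor of a hundred
```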
Different subsystems may require different areal disaggregation for different purposes. Transport as the main object of analysis may require more detail than transport in a residential location analysis. Residential analysis may require some kind of fine-scale disaggregation for residential areas, but with larger scale employment disaggregation, and vice versa for industrial analysis.
So far, all of these considerations fit into the earlier analysis of geographic computation. There is a great deal of room for new algorithms, but finding them depends on computer experts understanding the problems of spatial planning more intimately, and on spatial planners learning more about algorithmic thinking. Parallel computing may soon be achievable through networking among numerous coordinated PCs, but this (and to an extent other parallelism) depends on how far the simulation can be broken into independent parcels. The new algorithm for spatial interpolation discussed above was made possible in part by a strong element of proximity which does not have the same force in major aspects of the urban system. Parallel computation, in the present state of the art and particularly using networked computers, depends in my view on the decomposability of the urban system into subsystems for computational purposes.
There is, however, one overwhelmingly important aspect of the complexity of spatial decision-making and planning which remains to be discussed. Ordinarily, spatial planning deals with numerous spatial decisions whose impacts are not mutually independent. The value of one change depends on the effects of others which may or may not be considered at the same time. This leads to a combinatorial problem which is virtually insoluble in any formal sense. A set of twenty possible binary decisions generates over a million possible combinations. If the computation for one combination took as long as the environmental impact study mentioned above, we would still be computing well into the next ice age and beyond, before we had examined all combinations. There are ways to reduce this difficulty, but not to eliminate it entirely, and currently the best approaches depend in large part on the organization of the planning process itself.
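The arithmetic of this combinatorial explosion is easily verified; the sketch below restates the figures in the paragraph above (twenty binary decisions, and the eight-week run per combination reported earlier):

```python
# Combinatorial explosion of interdependent binary decisions.
n_decisions = 20
combinations = 2 ** n_decisions
print(combinations)                       # 1048576: "over a million"

# If each combination needed the eight-week environmental impact run:
weeks_per_run = 8
total_years = combinations * weeks_per_run / 52
print(round(total_years))                 # ~161,000 years of computing
```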
This is the point at which we should consider how the design of an interactive and collaborative system can influence that process. It is at best a process which seeks the answers to a very large number of "what-if" questions. A vigorous collaborative process can refine and structure the nature of these questions through the use of collective imagination and experience. But the vigor of this search for good courses of action will flag if the answer to every question takes eight weeks, eight days, or even eight hours. Ideally, such questions should be answerable at speed (say within eight minutes) in the course of a brainstorming session or a public hearing, and the results should be presented in a way which is complete and intelligible.
I believe, on the basis of the earlier discussion, and of my own experience, that these possibilities are now within reach, as to the simulations themselves. An open and accessible system along the lines I have outlined here and elsewhere will facilitate much more experiment and research, and begin to answer some open questions as well as to pose new ones.
The open question of collaborative group procedures, which is another whole arena beyond computation, presents a deeper and more complex set of issues. We cannot begin to approach them without more instruments, which include systems of the kind we are discussing, but also greater understanding of computational optimization and human creativity, and their interrelationships.