Spatially distributed estimates of meteorological data are becoming increasingly important as inputs to spatially explicit landscape, regional, and global models. Estimates of meteorological variables such as temperature, precipitation, and evapotranspiration rate are required by a number of landscape-scale models, including those of regeneration, growth, and mortality of forest ecosystems. To calculate daily microclimate conditions in mountainous terrain, the model MT-CLIM requires minimum and maximum daily temperature data as inputs (Running and Nemani, 1987). To compute forest evapotranspiration, landscape-scale ecological models such as FOREST-BGC use spatially explicit meteorological inputs from models such as MT-CLIM (Band et al., 1991). Accurate estimates of temperature are critical to the performance of these models. Beyond temperature modeling, temperature prediction at unsampled sites is also of interest in fire management, resource management, and spraying or seeding operations.
Accurate measurements of temperature are also of interest to scientists studying the "greenhouse effect" - global warming via the entrapment of longwave radiation by certain gases such as carbon dioxide. While there is disagreement on the extent of global warming, most scientists estimate its effects at between 0.5
Accurate temperature estimates are critical in the calibration of satellite sensors. Satellite surface temperature retrieval in mountainous terrain is complicated by the high variability of occurring temperatures and by complex terrain features. While satellite surface temperature retrieval appears to be a promising technology, surface variations have been shown to bias temperature measurements upwards of 3.0
Given a set of meteorological data, researchers are confronted with a variety of stochastic and deterministic interpolation methods to estimate meteorological variables at unsampled locations. Spatial interpolation is often an important first step in taking irregular point data and converting it for use in a GIS. Depending on the spatial attributes of the data, accuracies vary widely among different spatial interpolation methods (MacEachren and Davidson, 1987; Rhind, 1975). The choice of spatial interpolator is especially important in mountainous regions, where data collection is sparse and variables may change over short distances.
While there have been comparisons of interpolation methods, few research efforts have been directed towards comparing the effectiveness of different spatial interpolators in predicting temperature (e.g., Van Kuilenburg et al., 1982; Dubrule, 1983; Puente and Bras, 1986; Bardossy et al., 1987; Laslett et al., 1987; Phillips et al., 1992). A review of the literature indicates that a regionalized variable such as temperature, which is strongly correlated with elevation, should be well suited to kriging and cokriging. Because of the additional effort kriging and cokriging entail, it was decided to compare their effectiveness in estimating maximum and minimum temperature at unsampled locations with that of less computationally intensive techniques such as inverse distance weighted averaging, cubic splining, trend surface analysis (TSA), polynomial regression, and lapse rate methods. Kriging and cokriging have also received some criticism due to the subjective nature of variogram fitting - a central component of kriging (Phillips and Watson, 1986). In addition to the aforementioned methods, this research introduces optimal inverse distance weighting, where the inverse weighting parameter is chosen on the basis of minimum mean absolute error.
In this study, eight spatial interpolation techniques were compared across two regions (eastern and western North America), two temperature variates (tmax and tmin), and three temporal scales (10 year mean, seasonal mean, daily). Each interpolation technique was compared on the basis of bias, mean absolute error (MAE), and mean squared error (MSE). As the true temperature surface was not known, the comparison statistics were obtained using cross validation, where one data point is withheld and the remaining data points are used to predict at the withheld point; the procedure is repeated n times, once for each data point. In addition, the effects of data variance, data correlation with elevation, and lapse rate on MAE were investigated. Summary statistics were used to determine if any method was significantly better than the others tested on the basis of bias, MAE, and MSE. Summary statistics were also used to determine whether the temperature variate (tmax or tmin) or temporal scale (10 year mean, seasonal, or daily) affected interpolation.
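As an illustration, the leave-one-out cross-validation scoring described above can be sketched as follows; the `interpolate` argument and the naive mean interpolator at the end are hypothetical stand-ins for the eight methods actually tested.

```python
def loo_scores(points, values, interpolate):
    """Return (bias, MAE, MSE) from leave-one-out cross validation.

    Each point is withheld in turn, predicted from the remaining
    points, and the prediction errors are summarized.
    """
    errors = []
    for i in range(len(points)):
        train_pts = points[:i] + points[i + 1:]
        train_vals = values[:i] + values[i + 1:]
        pred = interpolate(train_pts, train_vals, points[i])
        errors.append(pred - values[i])
    n = len(errors)
    bias = sum(errors) / n
    mae = sum(abs(e) for e in errors) / n
    mse = sum(e * e for e in errors) / n
    return bias, mae, mse

# Example stand-in: always predict the mean of the training values.
mean_interp = lambda pts, vals, target: sum(vals) / len(vals)
```

Any of the interpolators discussed below can be plugged in as `interpolate`, so the same scoring harness applies across all methods.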
Inverse distance weighted averaging (IDWA) is a deterministic estimation method where values at unsampled points are determined by a linear combination of values at known sampled points. Distance-based weighting methods have been used to interpolate climatic data (Legates and Willmott, 1990). IDWA makes the assumption that values closer to the unsampled location are more representative of the value to be estimated than samples further away. Weights change according to the linear distance of the samples from the unsampled point; the spatial arrangement of the samples does not affect the weights. IDWA has seen extensive implementation in the mining industry due to its ease of use, and it has been shown to work well with noisy data. The choice of power parameter can significantly affect the interpolation results: as the power parameter increases, IDWA approaches the nearest neighbor method, where the interpolated value simply takes on the value of the closest sample point. Optimal inverse distance weighting is a form of IDWA where the power parameter is chosen on the basis of minimum mean absolute error.
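A minimal sketch of IDWA, assuming two-dimensional station coordinates and Euclidean distance; the function name and the default power of 2 are illustrative choices, not parameters taken from the study.

```python
import math

def idwa(points, values, target, p=2.0):
    """Estimate the value at `target` as a weighted average of samples.

    points : list of (x, y) sample locations
    values : sample values at those locations
    p      : power parameter; larger p approaches nearest neighbor
    """
    num = den = 0.0
    for (x, y), v in zip(points, values):
        d = math.hypot(x - target[0], y - target[1])
        if d == 0.0:            # target coincides with a sample point
            return v
        w = 1.0 / d ** p        # weight decays with distance
        num += w * v
        den += w
    return num / den
```

Optimal inverse distance weighting would simply loop `idwa` over candidate values of `p` and keep the one with the lowest cross-validated mean absolute error.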
Splining is a deterministic technique to represent two-dimensional curves on three-dimensional surfaces (Eckstein, 1989; Hutchinson and Gessler, 1994). Splining may be thought of as the mathematical equivalent of fitting a long flexible ruler to a series of data points. Like its physical counterpart, the mathematical spline function is constrained at defined points. Splines assume smoothness of variation and have the advantage of creating curves and contour lines which are visually appealing. Among splining's disadvantages are that no estimates of error are given and that splining may mask uncertainty present in the data. Splines are typically used for creating contour lines from dense, regularly-spaced data; they may, however, be used for interpolation of irregularly-spaced data.
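The flexible-ruler analogy can be made concrete with a one-dimensional natural cubic spline; this is an illustrative sketch under the assumption of strictly increasing knots, not the two-dimensional splining software used in the study.

```python
def natural_cubic_spline(xs, ys):
    """Return a callable S(x) that passes exactly through the knots.

    Uses natural end conditions (second derivative zero at both ends).
    """
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    # Tridiagonal system for interior second derivatives M_i (M_0 = M_n = 0).
    a = [0.0] * (n + 1)   # sub-diagonal
    b = [1.0] * (n + 1)   # diagonal
    c = [0.0] * (n + 1)   # super-diagonal
    d = [0.0] * (n + 1)   # right-hand side
    for i in range(1, n):
        a[i] = h[i - 1]
        b[i] = 2.0 * (h[i - 1] + h[i])
        c[i] = h[i]
        d[i] = 6.0 * ((ys[i + 1] - ys[i]) / h[i]
                      - (ys[i] - ys[i - 1]) / h[i - 1])
    # Thomas algorithm: forward sweep, then back substitution.
    for i in range(1, n + 1):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    M = [0.0] * (n + 1)
    M[n] = d[n] / b[n]
    for i in range(n - 1, -1, -1):
        M[i] = (d[i] - c[i] * M[i + 1]) / b[i]

    def S(x):
        i = 0                       # locate the interval containing x
        while i < n - 1 and x > xs[i + 1]:
            i += 1
        t0, t1 = xs[i + 1] - x, x - xs[i]
        return ((M[i] * t0 ** 3 + M[i + 1] * t1 ** 3) / (6.0 * h[i])
                + (ys[i] / h[i] - M[i] * h[i] / 6.0) * t0
                + (ys[i + 1] / h[i] - M[i + 1] * h[i] / 6.0) * t1)
    return S
```

As the text notes, the fitted curve honors every data point exactly and varies smoothly in between, but it provides no error estimate.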
Polynomial regression is a stochastic, global technique which fits the variable of interest to some linear combination of regressor variables (Myers, 1990). In this case, the variable of interest is temperature and the regressor variables are the weather station's X, Y, and Z coordinates. Typically, the goal when using polynomial regression is to obtain the best fit with the simplest model. The addition of regressor variables which do not contribute significantly to the model has the unwanted effect of increasing multicollinearity, which may negatively affect the model's ability to predict outside the convex hull of the data points (Myers, 1990). In this study, temperature was fitted to first, second, and third order polynomial models of the X and Y coordinates plus elevation. The model with the lowest Mallows' Cp statistic was then chosen for predicting temperature.
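A sketch of the first-order case of this model, fitting temperature to station X, Y, and elevation Z by ordinary least squares; the function names are illustrative, and the Cp-based comparison among first-, second-, and third-order models is omitted for brevity.

```python
import numpy as np

def fit_first_order(X, Y, Z, T):
    """Least-squares fit of T = b0 + b1*X + b2*Y + b3*Z.

    X, Y, Z, T are 1-D arrays over the weather stations.
    Returns the coefficient vector (b0, b1, b2, b3).
    """
    A = np.column_stack([np.ones_like(X), X, Y, Z])
    coef, *_ = np.linalg.lstsq(A, T, rcond=None)
    return coef

def predict(coef, x, y, z):
    """Predict temperature at an unsampled site (x, y, z)."""
    return coef[0] + coef[1] * x + coef[2] * y + coef[3] * z
```

Higher-order models add squared and cubic terms of X, Y, and Z to the design matrix; Mallows' Cp then trades residual error against the number of regressors to pick the simplest adequate model.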
Trend surface analysis (TSA) can be thought of as a subset of polynomial regression. TSA is a stochastic technique which separates the data into regional trends and local variations. The regional component of TSA can be thought of as a regression surface fit to the data, while the local variations can be thought of as a map of residuals. Values at unsampled locations may be estimated using the mathematical relationship between the locational variables X, Y and the regionalized meteorological variable of interest. In this study, temperature was fitted to a third order polynomial, which was assumed to be sufficient to capture regional temperature variations.
TSA differs from the polynomial regression above in that elevation is not used in estimating temperature and TSA uses all regressor variables, not a subset chosen on the basis of Mallows' Cp. Estimation using TSA is limited by problems associated with edge effects and by multicollinearity caused by spatial autocorrelation, and TSA assumes errors are independent. In addition to its use as a spatial interpolation technique, TSA is used to remove broad trends prior to further spatial analysis such as kriging.
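A sketch of the third-order trend surface described above, regressing temperature on all ten monomials x^i y^j with i + j <= 3 and, unlike the polynomial regression method, no elevation term; the coordinates and fitting routine here are illustrative assumptions.

```python
import numpy as np

def tsa_design(x, y):
    """Build the 10-column third-order trend-surface design matrix."""
    return np.column_stack([x ** i * y ** j
                            for i in range(4) for j in range(4)
                            if i + j <= 3])

def fit_tsa(x, y, t):
    """Least-squares fit of the third-order trend surface to t(x, y)."""
    coef, *_ = np.linalg.lstsq(tsa_design(x, y), t, rcond=None)
    return coef

def tsa_predict(coef, x, y):
    """Evaluate the fitted trend surface at (x, y)."""
    return tsa_design(np.atleast_1d(x), np.atleast_1d(y)) @ coef
```

The residuals `t - tsa_predict(coef, x, y)` at the stations form the map of local variations mentioned above.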
The lapse rate method uses the relationship between temperature and elevation for a region to estimate temperatures at unsampled sites. Typically, temperature decreases as elevation increases; this relationship between temperature and elevation is known as the lapse rate. The method uses the temperature value of the nearest weather station and the difference in elevation to estimate temperature at the unsampled site: the difference in elevation is multiplied by the lapse rate, and the result is added to or subtracted from the weather station temperature to yield the site temperature. The lapse rate method makes the assumption that the lapse rate is constant for the study region.
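The elevation adjustment described above reduces to one line of arithmetic; the default rate used here (about -3.57 degrees F per 1000 ft, the standard environmental lapse rate) is an illustrative assumption, not the rate used in the study.

```python
def lapse_rate_estimate(station_temp_f, station_elev_ft, site_elev_ft,
                        lapse_f_per_kft=-3.57):
    """Adjust the nearest station's temperature by the elevation difference.

    lapse_f_per_kft: assumed constant lapse rate in degrees F per 1000 ft.
    """
    elev_diff_kft = (site_elev_ft - station_elev_ft) / 1000.0
    return station_temp_f + lapse_f_per_kft * elev_diff_kft
```

For example, a site 3000 ft above a 60 degree F station would be estimated at roughly 49.3 degrees F under this assumed rate.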
Kriging is a stochastic technique similar to inverse distance weighted averaging in that it uses a linear combination of weights at known points to estimate the value at an unknown point. Kriging is named after D.G. Krige, who used its underlying theory to estimate ore content; the general formulation of kriging, however, was developed by Matheron (1969). Kriging uses a semivariogram, a measure of spatial correlation between two points, so the weights change according to the spatial arrangement of the samples. Unlike the other estimation procedures investigated, kriging provides a measure of the error or uncertainty of the estimated surface. In addition, kriging will not produce the edge effects that result from trying to force a polynomial to fit the data, as with TSA.
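The experimental semivariogram that kriging fits a model to can be estimated as half the average squared difference between sample pairs, binned by separation distance; the binning parameters here are illustrative assumptions, and the subsequent model fitting (spherical, exponential, or Gaussian) is omitted.

```python
import math

def semivariogram(points, values, bin_width, n_bins):
    """Estimate gamma(h) for distance bins [0, w), [w, 2w), ...

    Returns one semivariance estimate per bin (None for empty bins).
    """
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):           # every unordered pair once
            d = math.dist(points[i], points[j])
            b = int(d // bin_width)
            if b < n_bins:
                sums[b] += 0.5 * (values[i] - values[j]) ** 2
                counts[b] += 1
    return [s / c if c else None for s, c in zip(sums, counts)]
```

A semivariogram model fitted to these binned estimates then supplies the covariances that the kriging system solves for its weights.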
Cokriging is similar to kriging except it uses additional covariates, usually more intensely sampled, to assist in prediction. Cokriging is most effective when the covariates are highly correlated. Both kriging and cokriging assume homogeneity of first differences. While kriging is considered the best linear unbiased spatial predictor (BLUP), there are problems of nonstationarity in real-world data sets.
The NWS data were read from the EarthInfo CD-ROM and then filtered using dBaseIV software. The SNOTEL data were downloaded via modem access and converted from their ASCII file format into dBaseIV format. This conversion was necessary for the filtering and creation of the data sets used in this comparative analysis. SNOTEL data are in a different format from NWS data; in addition, SNOTEL data are recorded in degrees Celsius, where NWS data are recorded in degrees Fahrenheit. All SNOTEL temperature readings were converted to degrees Fahrenheit and reformatted to conform to the NWS data format.
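The unit conversion applied to the SNOTEL readings is the standard one:

```python
def celsius_to_fahrenheit(temp_c):
    """Convert a SNOTEL reading in degrees C to the NWS convention (degrees F)."""
    return temp_c * 9.0 / 5.0 + 32.0
```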
The NWS data are known to have erroneous daily values resulting from errors in data entry, data recording, and data formatting (Reek, Doty, and Owen, 1992). In their paper, Reek et al. propose a deterministic approach for the removal of these systematic errors. Since 1991, NCDC has implemented the approach proposed by Reek et al.; the pilot program, entitled Validation of Historical Daily Data (ValHiDD), was designed to improve upon an older quality assurance program called Geographical Edit and Analysis (GEA). NCDC estimates that errors in its historical data are small, in the area of 0.05% or less. In addition, most errors associated with NWS data are associated with older (pre-1960) temperature readings. As this research only deals with recent (1980-1990) data, the data can be assumed to be relatively error free. Furthermore, since all interpolation methods used the same 60 data sets, any errors in the data apply across all interpolation methods; hence, the conclusions of a comparative analysis would not be affected.
Missing data were handled differently depending on the temporal scale. For 10 year means, missing monthly means were averaged from adjacent months. A given weather station could have as many as two months missing out of any given year and still be included in the 10 year mean. In cases where monthly averages were missing, the first course of action was to attempt to calculate the monthly average from daily data: if 20 or more daily values were present for the given month, their average was used as the monthly average. Typically, certain stations were prone to missing data. This data averaging was necessary to ensure an adequate number of stations for the 10 year mean. Even with data averaging, nearly one third of candidate stations within the test region were dropped from this analysis due to missing data. For seasonal means, missing monthly averages were calculated from daily averages when there were 20 or more daily temperatures available. For daily temperature values, stations with missing data were excluded from the analysis.
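The 20-day rule for computing a monthly average from daily data can be sketched as follows; the data layout (a list with None marking missing days) is an assumption of this illustration.

```python
def monthly_mean(daily_temps):
    """Return the monthly mean, or None if fewer than 20 days are present.

    daily_temps: one entry per day of the month, None for missing days.
    """
    present = [t for t in daily_temps if t is not None]
    if len(present) >= 20:
        return sum(present) / len(present)
    return None    # month stays missing; station may be dropped
```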
Each interpolation technique was compared on the basis of bias, mean absolute error (MAE), and mean squared error (MSE). For kriging and cokriging, cross validation techniques were used to choose the best semivariogram model from among candidate models (spherical, exponential, or Gaussian). In addition, cross validation techniques were used to select the search radius that minimizes the kriging variance.
For kriging and cokriging, semivariogram modeling was accomplished using Geo-EAS software (Englund, 1988) and VARIOWIN software (Pannatier, 1994). Kriging and cokriging were performed using the GSLIB software library (Deutsch, 1992). Finally, the results of each interpolation method were examined visually using Sigma Plot™.
This comparative effort studied three temporal scales: 10 year average, seasonal, and daily. A total of 408 temperature contour surfaces were generated. The 408 surfaces result from 10 year mean maximum and minimum temperatures for two regions and six interpolation techniques (1*2*2*6 = 24); four seasonal maximum and minimum temperatures for two regions and six interpolation techniques (4*2*2*6 = 96); and 12 daily maximum and minimum temperatures for two regions and six interpolation techniques (12*2*2*6 = 288). The results of the comparative analysis are presented in tabular as well as graphical format.
Summary statistics rather than analysis of variance (ANOVA) were used to determine if any interpolation method was significantly better than the others tested on the basis of bias, MAE, and MSE. The reason for this choice is that hypothesis testing using ANOVA assumes the means compared are drawn from populations with a common variance; as this analysis provides a comparison across different temporal scales and geographic regions, the assumption of a common variance is not valid. Data attributes were also investigated to determine whether data variance, data correlation with elevation, or lapse rate significantly affect interpolation. Because these spatial metrics are easy to calculate, they can be determined prior to interpolation to suggest which method may be most appropriate.
For all cases tested, kriging had lower MAE values when the data were anisotropic. When the data were isotropic, optimal inverse distance performed better than kriging based on MAE. Large temperature variances and temperature ranges tended to increase interpolator MAE. Higher correlations between temperature and elevation tended to favor polynomial regression over other interpolation techniques. For 10 year temperature means, the only situation where polynomial regression was not ranked highest based on MAE was when the correlation between elevation and temperature was lower than 0.72. For seasonal and daily data, the choice of spatial interpolator became less clear when correlations between elevation and temperature fell below 0.60; above correlations of 0.60, regression appears to be the interpolator of choice.
Landscape complexity did not directly affect the choice of spatial interpolator; data attributes, however, change with landscape complexity. Region 2 had greater landscape complexity than Region 1 and, as a result, had greater temperature variances and observed temperature ranges across all temporal scales.
In general, the results indicate that increased variance and data range result in decreased interpolator accuracy, as indicated by higher MAE values. The results also indicate that interpolation techniques which use ancillary elevation information to predict temperature benefit from higher correlations between elevation and temperature. Of the data attributes investigated, correlation between elevation and temperature and temperature variance had the strongest influences on predictor performance.
Range, variance, and correlation are important attributes to consider when selecting a spatial interpolator. Where temporal scales are short, preliminary data analyses are especially important to determine the suitability of a particular interpolation technique. The lapse rate method, cokriging, and polynomial regression cannot be recommended when correlations between temperature and elevation are below 0.72. When temperature variances are large, the performance of all interpolation techniques suffers. The larger MAE values of daily minimum and maximum temperature interpolation can be attributed to higher temperature ranges and variances.
Results from the interpolation of ten year mean maximum and minimum temperature show that MAE increases with increasing temperature variance and temperature range. Ten year mean results indicate lower MAE values across all interpolation techniques as the correlation between temperature and elevation increased. Polynomial regression was ranked first based on MAE for all cases except Region 2 minimum temperature, where the correlation between elevation and temperature was 0.71. Region 2 minimum temperature, which had the highest data variance and range, also had the poorest interpolator performance based on MAE.
Results from the interpolation of seasonal mean maximum and minimum temperature were not as clear as for 10 year means. The seasonal results indicate that trend surface analysis (TSA), inverse distance squared (IDA), optimal inverse distance (ODA), and kriging were all rather robust to the effects of temperature range, temperature variance, and temperature correlation with elevation. The seasonal results did indicate much lower MAE values for polynomial regression across all seasons. In addition, there was clear evidence that polynomial regression gives more accurate results, representative of the original range of the data, when correlations between temperature and elevation are high. These trends are similar across both Region 1 and Region 2. Region 2 had a wider range of temperature values and a higher overall variance than Region 1, and increased correlations between elevation and temperature resulted in better polynomial regression performance for Region 2.
Results from the interpolation of Region 1 daily maximum and minimum temperature indicate that increasing temperature variance affects interpolator performance negatively, while increased correlations between elevation and temperature had a positive effect on interpolator performance. Temperature range did not seem to affect interpolator performance. Results for Region 2 daily maximum and minimum temperature differed from Region 1 results in that temperature range had a negative effect on interpolator performance: as the temperature range increased, MAE values across all interpolators increased significantly. The effects of temperature variance and correlation between elevation and temperature were the same as for Region 1. As temperature variance increased, MAE values across all interpolators increased; as correlations between elevation and temperature increased, MAE values dropped significantly for those interpolation methods which used elevation as ancillary information.
When correlations between temperature and elevation are low, the elevation term, Z, drops out and what is left is similar to TSA; when the elevation term of polynomial regression drops out, the method is not recommended. Of the methods tested, polynomial regression gave the most visually plausible results. When elevation is highly correlated with temperature, the temperature contours tend to follow elevation contours; in addition, broad regional trends are captured by the X and Y variables when they are significant. Care must be taken with polynomial regression to ensure the results are representative of the original data range. Where station elevations are not representative of regional elevations (as in Region 2), care must be taken in comparing observed and interpolated data. For Region 2, temperature contours which used elevation were expected to have slightly lower mean temperatures, with somewhat lower minimum temperatures, than the observed data because the DEM contained higher elevations than did the stations.
In general, cokriging did not perform well in this study. The restrictions placed on parameter selection due to semivariogram models being strictly positive definite may have adversely affected cokriging results. In addition, the high variability of temperature data does not appear to lend itself to cokriging. The greater the data variability, the more difficult the data are to fit, and cokriging requires the fitting of two semivariograms and one semi-crossvariogram for each data set.
Additional data would enhance this research effort. The application of this research methodology to larger data sets such as the NCAR World Monthly Surface Station Climatology (Spangler and Jenne, 1988) would be of interest as no study has been conducted using elevation to assist in temperature prediction at global scales.