Model assessment involves expressing the performance of a model for a given purpose (e.g. a particular range of conditions or locations). The assessment of distributed hydrological models (i.e. spatially explicit hydrological models) is usually done with limited point samples, which are inadequate for assessing spatial performance. Spatial fields provide a more complete picture of 'reality' against which spatial models should be assessed, and this type of data is increasingly available in hydrology through improved remote sensing techniques and other methods of spatial sampling. There are few examples of spatial fields being used for model assessment. Where they have been used, the fields provided sensitive checks on the modelling and were generally also interrogated to reveal issues with model structure. However, most of these analyses were done visually, because the standard comparison methods (i.e. objective functions) do not currently utilise the rich information on spatial organisation that spatial fields contain. Visual comparison is a valuable method for comparing fields, as it allows background knowledge (e.g. experience, understanding of purpose) to be incorporated into the process. Unfortunately, visual comparison is not rigorous, repeatable, unbiased or quantitative, so it cannot be used when measures of error or similarity are wanted. It can, however, be used to learn what aspects of comparison should be pursued by new quantitative methods. The general pattern analysis literature has been reviewed previously to identify comparison methods that can potentially emulate these aspects (Wealands et al. 2005). The methods that emulate the ability to tolerate differences in value and location between elements are pursued in this paper. These methods are for use after standard measures (e.g. bias, RMSE) have been applied. They give an overall measure of error or similarity under specified tolerances.
They also produce graphical measures that can be inspected for more localised analyses. Tolerant comparisons require tolerances to be specified for differences in value (ΔV) and location (ΔL). The tolerances can be specified as crisp or fuzzy: crisp tolerances control which values and locations between elements in the fields are judged as equal, whereas fuzzy tolerances define a scale (from one to zero) that describes how similar elements are in value and location. Figure 1 shows an example of how crisp and fuzzy tolerances can be defined. Using these tolerances, each element in a modelled field is compared to the observed field: all observed elements within a distance ΔL of the modelled element are treated as being similar (to some degree). The tolerances (ΔV and ΔL) are combined to determine the optimum local measure for each element, and this field of measures is summarised to produce a final measure. One observed field is compared against five 'model' fields, which are created by introducing distortions to the observed field. The results illustrate how the measures respond to differences: fields with differences within the tolerances produce equivalent results. By contrasting measures with and without tolerances, the presence or absence of shifts, noise or scale differences can be inferred. Such inferences apply to the whole field, although more localised analysis can provide information on local effects.
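The tolerant comparison described above can be sketched in code. This is a minimal illustration, not the paper's actual formulation: the function name, the linear fuzzy membership functions, and the square (Chebyshev) search neighbourhood are all assumptions made for the sake of a runnable example.

```python
import numpy as np

def tolerant_comparison(obs, mod, dv, dl):
    """Sketch of a fuzzy tolerant comparison of two fields.

    For each modelled cell, observed cells within a location tolerance
    of dl cells are searched. Value agreement is scored on a fuzzy scale
    from one (identical) to zero (difference of dv or more), location
    agreement likewise from one (same cell) to zero (beyond dl). The
    optimum local measure per cell is the best combined score found;
    the overall measure is the mean of the local measures.
    """
    rows, cols = mod.shape
    local = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            best = 0.0
            for di in range(-dl, dl + 1):
                for dj in range(-dl, dl + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < rows and 0 <= jj < cols:
                        # fuzzy membership in value: linear decay to 0 at dv
                        sv = max(0.0, 1.0 - abs(mod[i, j] - obs[ii, jj]) / dv)
                        # fuzzy membership in location (Chebyshev distance)
                        sl = max(0.0, 1.0 - max(abs(di), abs(dj)) / (dl + 1))
                        # combine value and location memberships (fuzzy AND)
                        best = max(best, min(sv, sl))
            local[i, j] = best
    return local.mean(), local
```

The returned field of local measures supports the localised, graphical analysis mentioned above, while the mean gives the single overall measure. A crisp variant would replace the two membership functions with 0/1 thresholds at ΔV and ΔL, so any field differing only within the tolerances would score exactly one.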