
Journal of Environmental Quality - Article



This article in JEQ

  Vol. 41 No. 6, p. 1893–1905
  Open Access
  Received: Dec. 23, 2011
  Published: Oct. 16, 2012

  * Corresponding author: Thomas.Orton@derm.qld.gov.au

Analyzing the Spatial Distribution of PCB Concentrations in Soils Using Below–Quantification Limit Data

  1. Thomas G. Orton a,*
  2. Nicolas P. A. Saby a,*
  3. Dominique Arrouays a
  4. Claudy C. Jolivet a
  5. Estelle J. Villanneau a
  6. Jean-Baptiste Paroissien a
  7. Ben P. Marchant b
  8. Giovanni Caria c
  9. Enrique Barriuso d
  10. Antonio Bispo e
  11. Olivier Briand f

  a INRA, US 1106 InfoSol, F-45075 Orléans, France
  b Rothamsted Research, Harpenden, Hertfordshire AL5 2JQ, United Kingdom
  c INRA, US0010 Laboratoire d’Analyse des sols, 273 rue de Cambrai, 62000 Arras, France
  d INRA–AgroParisTech, UMR1091 Environnement et Grandes Cultures, 78850 Thiverval-Grignon, France
  e ADEME, Waste and Soil Research Department, 20 Avenue du Grésillé, BP 90406, 49004 Angers Cedex 01, France
  f ANSES, Laboratory for Food Safety, 23 Avenue du Général de Gaulle, 94706 Maisons-Alfort Cedex, France

  Assigned to Associate Editor Jim Miller.


Polychlorinated biphenyls (PCBs) are highly toxic environmental pollutants that can accumulate in soils. We consider the problem of explaining and mapping the spatial distribution of PCBs using a spatial data set of 105 PCB-187 measurements from a region in the north of France. A large proportion of our data (35%) fell below a quantification limit (QL), meaning that their concentrations could not be determined to a sufficient degree of precision. Where a measurement fell below this QL, this inequality was the only information available. In this work, we demonstrate a full geostatistical analysis—bringing together the various components, including model selection, cross-validation, and mapping—using censored data to represent the uncertainty that results from below-QL observations. We implement a Monte Carlo maximum likelihood approach to estimate the geostatistical model parameters. To select the best set of explanatory variables for explaining and mapping the spatial distribution of PCB-187 concentrations, we apply the Akaike Information Criterion (AIC). The AIC provides a trade-off between the goodness-of-fit of a model and its complexity (i.e., the number of covariates). We then use the best set of explanatory variables to help interpolate the measurements via a Bayesian approach, and produce maps of the predictions. We calculate predictions of the probability of exceeding a concentration threshold, above which the land could be considered as contaminated. The work demonstrates some differences between approaches based on censored data and on imputed data (in which the below-QL data are replaced by a value of half of the QL). Cross-validation results demonstrate better predictions from the censored data approach, and we can therefore have greater confidence in the information that its predictions provide.


    Abbreviations: AIC, Akaike Information Criterion; CE, censored data approach with trend selected for explaining data; CLC, Corine Land Cover; CM, censored data approach with trend selected for mapping; CO, censored data approach with constant mean; IM, imputed data approach with trend selected for mapping; IO, imputed data approach with constant mean; LLF, logarithmic loss function; LMM, linear mixed model; MCMC, Markov chain Monte Carlo; MCML, Monte Carlo maximum likelihood; MLLF, mean of the logarithmic loss function; MQLF, mean of the quadratic loss function; PCB, polychlorinated biphenyl; POP, persistent organic pollutant; QL, quantification limit; QLF, quadratic loss function; SOC, soil organic carbon

Polychlorinated biphenyls (PCBs) are anthropogenic persistent organic pollutants (POPs), mainly derived from industrial activities (e.g., transformers, capacitors, paints). They have been detected in various kinds of environmental compartments, where they can bioaccumulate, particularly in soils rich in organic matter (Smith et al., 1993; Krauss et al., 2000). They are highly toxic and recalcitrant to degradation (Safe, 1994), and have been shown to be subject to long-range transport and deposition (Meijer et al., 2003; Klanova et al., 2008; Holoubek et al., 2009). The Stockholm Convention of 2001 banned the production and use of PCBs worldwide, but they continue to be released from old equipment and waste sites (Breivik et al., 2002). The spatial distribution of PCB concentrations in soils depends on several factors, including the locations of PCB sources, transport processes, and the different rates of environmental breakdown (Safe, 1994).

The appropriate statistical treatment of data on PCB concentrations is vital for understanding the factors governing the spatial distribution of PCBs in soils. Geostatistics is the branch of statistics that deals with spatial data; it enables us to interpolate measurements and represent the attendant uncertainty, and its methods can be used to assess and map the risk associated with contaminants (e.g., Cattle et al., 2002; Ouyang et al., 2003). The quality of any inference and maps produced by a geostatistical method will largely depend on the quality of the data being analyzed. Data on the concentrations of PCBs in soil are subject to limitations of measurement procedures. Measurements of samples with low content can have a high coefficient of variation, in which case their content cannot be determined with sufficient precision. Such measurements are often reported only as being less than some limit of quantification, which may lead to methodological difficulties in any subsequent statistical or spatial analysis.

The quantification limit (QL) is defined as the lowest concentration at which the substance can not only be detected, but also quantified such that predefined goals for precision and bias are met (Armbruster and Pry, 2008). A commonly applied procedure for analyzing data when some measurements are reported as being less than a QL is to replace all such data by some value, often the QL itself or, more commonly, half of the QL (Hing et al., 2001; Bergstrand and Karlsson, 2009). This enables standard geostatistical techniques (Webster and Oliver, 2007) to be used for variogram estimation and kriging, and we refer to this as the imputation approach. However, it can lead to significant bias in the statistical results (De Oliveira, 2005). Alternative approaches have been developed to properly incorporate censored data in statistical analyses, since Cohen (1959) showed how such data could be used to estimate the mean and variance of a normal distribution. More recently, De Oliveira (2005) and Fridley and Dixon (2007) have implemented Bayesian approaches to incorporate censored observations in geostatistical case studies, the latter using a simulation study to demonstrate better parameter estimates and more accurate predictions compared with the imputation approach.
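The contrast between the two treatments can be sketched numerically. The following Python example is our own illustration with simulated data (the study itself used MATLAB): it estimates the mean and standard deviation of a log-scale variable when 35% of the values are censored at a QL, comparing QL/2 imputation with a censored maximum-likelihood fit. All names and parameter values are hypothetical.

```python
# Illustrative comparison of QL/2 imputation vs. a censored-likelihood fit.
# Simulated log-scale data; all values here are hypothetical.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(42)
s = rng.normal(0.0, 1.0, size=105)      # "true" log-concentrations
ql_s = np.quantile(s, 0.35)             # QL chosen so ~35% of data are censored
observed = s[s >= ql_s]
n_cens = int(np.sum(s < ql_s))

# Imputation approach: replace below-QL values by QL/2
# (on the log scale, ln(QL/2) = ln(QL) - ln 2)
imputed = np.concatenate([observed, np.full(n_cens, ql_s - np.log(2.0))])
mu_imp, sd_imp = imputed.mean(), imputed.std(ddof=1)

# Censored-likelihood approach: observed points contribute a density term,
# censored points contribute the probability mass below the QL
def neg_loglik(theta):
    mu, log_sd = theta
    sd = np.exp(log_sd)
    ll = stats.norm.logpdf(observed, mu, sd).sum()
    ll += n_cens * stats.norm.logcdf(ql_s, mu, sd)
    return -ll

res = optimize.minimize(neg_loglik, x0=[observed.mean(), 0.0],
                        method="Nelder-Mead")
mu_cens, sd_cens = res.x[0], float(np.exp(res.x[1]))
```

The censored fit treats each below-QL point through its probability mass below the QL rather than through a fixed surrogate value, which is the essential difference exploited throughout this article.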

Hoeting et al. (2006) consider explanatory variable selection for geostatistical models, and stress the importance of incorporating spatial correlation when doing so. They recommend that a model comparison measure, such as the Akaike Information Criterion (AIC) (Akaike, 1973), be used to compare models while accounting for the spatial correlation of residuals in a geostatistical framework. The AIC requires that parameters are fitted by maximum likelihood, which is straightforward enough when given complete data (i.e., “exact” observations of the primary variable at all N sample locations). However, when some of the data are only reported as being within certain bounds (i.e., censored data), the likelihood function is not available in closed form, and we must use some numerical method to approximate it. Christensen (2004) presents the Monte Carlo maximum likelihood (MCML) algorithm for approximating and maximizing the likelihood, and this algorithm can be applied when we have censored data. The method uses samples of the censored data obtained by a Markov chain Monte Carlo (MCMC) method, and Gilks et al. (1996) provide an introduction. The optimized likelihoods from different models can then be compared using the AIC.

Hoeting et al. (2006) also state that once a model is selected, the researcher is free to re-estimate model parameters by some other appropriate method for prediction. We use a fully Bayesian approach (De Oliveira, 2005; Fridley and Dixon, 2007) for prediction based on the selected model, so that the effects of the uncertainty about trend and covariance parameters are appropriately accounted for in the prediction distributions.

Cross-validation of the predictions cannot be done in the usual manner because the validation statistics—bias, mean squared error, and mean and median of the standardized prediction errors, see Lark (2000)—cannot be calculated for the censored data. We therefore consider cross-validation of the probabilities of exceeding various threshold concentrations, and calculate logarithmic and quadratic loss functions (Gneiting and Raftery, 2007) to compare results.

We consider data on the PCB-187 concentration from 105 locations in an area of northern France. A large proportion (35%) of these data were only reported as being less than a QL; the main issue of our study is the appropriate geostatistical treatment of these below-QL data. Villanneau et al. (2011) considered these data, along with data on 89 other POPs. Their analysis was primarily aimed at identifying the possibility of mapping the spatial distribution of each of the POPs; detailed consideration of the treatment of below-QL data was not included, but was highlighted as an issue, and in the present study we deal with this. We first use the AIC to select the best set of explanatory variables for these data, and then predict the PCB concentration and probabilities of exceeding a concentration threshold at unsampled locations to produce maps of the study area. We compare the performances of the geostatistical methods through cross-validation.

Materials and Methods

Case Study and Data

Study Area

We chose to focus on an area in northern France characterized by various levels of PCBs (Villanneau et al., 2011). Our data are a subset of the French National Soil Monitoring Network data set, RMQS, which covers the entire French territory on a 16-km-square grid. Figure 1 shows the 25,000-km2 study area in northern France: all maps in this article were produced using the freely available software Generic Mapping Tools (Wessel and Smith, 1991). The land cover—classified according to the first level of the Corine Land Cover (CLC) 2000 database (Heymann et al., 1994)—includes arable land (the majority of which is intensively cultivated), dense urban areas, grasslands, and a few forests. The mean annual temperature is 10.8°C, and the average total rainfall is 710 mm yr−1. The prevailing wind direction is southwest to northeast.

Fig. 1.

The study area in the north of France with the land-use classes.


Soil Sampling

One hundred five sampling sites lie within the northern France study region (Fig. 1). Sampling sites were selected at the center of each grid cell. If natural soil was not present (i.e., urban area, road, river, etc.), an alternative location was selected as close as possible to the center of the cell, within a 1-km radius, to find a natural (undisturbed or cultivated) soil. For this study, samples were taken from June 2002 to November 2007. At each site, 25 individual core samples of the topsoil (0–30 cm) layer were taken, using an unaligned sampling design within a 20-m by 20-m area. Core samples were bulked to obtain a composite sample for each site. Soil samples were air-dried and sieved to 2 mm before analysis (AFNOR, 1994).

Soil Analysis

For each site, a subsample of the topsoil composite sample was analyzed for PCB-187. Analyses were developed and conducted at the Institut National de la Recherche Agronomique, Laboratoire d’analyses des sols, Arras (France). Polychlorinated biphenyls were extracted from approximately 20 g of soil by pressurized liquid extraction with a hexane/acetone mixture (50/50, v/v) at high pressure and temperature (13.79 MPa and 150°C). The extract was recovered and transferred to an Erlenmeyer flask, partially concentrated by rotary evaporation to a volume of about 5 mL, and then completely evaporated under a gentle stream of nitrogen. The dry residue was taken up in 2 mL of pure hexane. After a sufficient contact time, the final extract was purified by adsorption chromatography on a previously conditioned Florisil column. The hexane eluate was concentrated by partial rotary evaporation to a volume of about 5 mL, then washed with concentrated sulfuric acid and separated by liquid/liquid partition in a separating funnel. Reduced copper turnings were placed in contact with the hexane phase to remove sulfur. The sample was then completely evaporated under a gentle stream of nitrogen, and the dry residue was taken up in 200 µL of nonane.

Purified extracts were injected into an Agilent 6890N gas chromatograph, separated on a DB5-MS capillary column (60 m by 0.25 mm by 0.25 µm) coated with phenyl/methyl-polysiloxane (5/95, v/v), and quantified by internal calibration against standard solutions of 20 to 2000 ng mL−1 using a Waters Autospec Ultima high-resolution mass spectrometer. The calibration used eight 13C12 mass-labeled internal standards (PCB 28L, 52L, 118L, 153L, 180L, 202L, 206L, and 209L), all at 400 ng mL−1. Table 1 gives the chromatographic parameters.

Table 1.

Temperature gradient for the chromatographic analysis of polychlorinated biphenyl.

Initial temp. (°C)   Ramp rate (°C min−1)   Final temp. (°C)   Hold (min)
100                  0                      100                3
100                  20                     280                3
280                  20                     325                17.5

The mass spectrometer was operated in single-ion recording mode at a resolution of 10,000. For an aliquot of 20 g of soil, the quantification limit for PCBs was 0.02 µg kg−1. The PCB methods correspond to the French standard NF ISO 10382 (AFNOR, 2003) and to the French standard XP X 33-012 (AFNOR, 2000). The spatial distribution of the PCB-187 measurements is shown in Fig. 2.

Fig. 2.

The PCB-187 concentration data from the 105 sample locations.


Data Analysis Methods

In the following, we provide a brief introduction to some of the theory of classical geostatistics and the linear mixed model (i.e., the statistical model on which the methods for covariance parameter estimation and kriging prediction are based). For detailed descriptions of the theory and practice of geostatistical methods we refer to Stein (1999) for a mathematical perspective or Webster and Oliver (2007) for a more practical perspective. To implement the methods in this section we used code written for MATLAB (2004).

In this work, we consider two stages of analysis: first, a model selection stage, in which we select the best set of explanatory covariates for describing the spatial variation of PCB concentrations. Hoeting et al. (2006) stress the importance of including spatial correlation when performing geostatistical model selection. We do this by fitting parameters via MCML (Christensen, 2004; this method is appropriate when some data are censored), and comparing models using the AIC (Akaike, 1973). The goal of the second stage of analysis is to use the selected covariates to interpolate between the data locations and map the PCB concentrations or the risk of PCB contamination, for which we use a Bayesian approach. In contrast to conventional geostatistics, the Bayesian approach, implemented through MCMC sampling, fully accounts for parameter uncertainty. This can be important if the number of observations is low, or if it is important to estimate risks (e.g., the probability of exceeding some concentration threshold in the soil). We briefly describe this in the text, but provide more details in the supplemental material.

The Linear Mixed Model, Likelihood Function, and Parameter Estimation for Censored Data

We begin by describing the geostatistical linear mixed model (LMM) for observed data (i.e., if the values of all measurements were observed), and then describe the extension necessary to deal with the “unobserved” values below the quantification limit. Since the data in our case study (z) exhibited a skewed distribution, we worked with the logarithmic transform of the data (s = ln z), and we therefore describe the methods in terms of estimating parameters and predicting values of the transformed variable, S. We subsequently deal with the issue of the back-transform of predictions to the original scale of the data.

In the LMM we assume that our data can be described by the sum of a fixed effects function and a random effects function:

S(xi) = β0 + Σ_{j=1}^{p−1} βj qj(xi) + ε(xi),  i = 1, …, N  [1]

where β0 is the intercept, and βj, j = 1, …, p − 1 are p − 1 regression coefficients associated with the p − 1 explanatory covariates, qj, j = 1, …, p − 1, which are known at the N data locations, xi. Together, β0 and the summation give the fixed effects function. The random effects are represented by ε(xi), which we assume to have a Gaussian distribution with zero mean. The spatial correlation between ε (the residuals) at different locations, ε(x1) and ε(x2), say, is described by one of the authorized covariance functions (e.g., an exponential model, see Webster and Oliver, 2007). This model gives the covariance for ε(x1) and ε(x2) as a function of their separation vector, x2 − x1. The model can take into account both the separation distance and direction (anisotropy), but for simplicity here, we consider isotropic covariance models that depend on the separation distance only, |x2 − x1|. Parameters for this model must be estimated from the data, which can then be used to calculate a covariance matrix, C, that describes the distribution of the residuals, ε. Writing in matrix notation, we put Xβ for the fixed effects model—X is the N × p fixed effects design matrix containing the known values of the explanatory covariates, and β the p-vector of regression parameters, including β0. For the random effects, we put ϑ for the vector of covariance function parameters, and write κ for the particular covariance model. We write s = [s(x1), s(x2), …, s(xN)] for the vector of observations of S at the data locations.
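As a concrete illustration of the covariance model, the following sketch (our own Python, with hypothetical parameter values; the authors worked in MATLAB) builds the covariance matrix C for three locations under an isotropic Gaussian covariance function with a nugget effect.

```python
import numpy as np

def gaussian_cov(h, c0, c1, a):
    """Isotropic Gaussian covariance: nugget c0, partial sill c1, range parameter a."""
    return c1 * np.exp(-(h / a) ** 2) + c0 * (h == 0)

# three data locations (coordinates in km; hypothetical)
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
C = gaussian_cov(h, c0=0.1, c1=1.0, a=5.0)   # covariance matrix of the residuals
```

The nugget contributes only on the diagonal (zero separation), so the diagonal entries equal c0 + c1 while off-diagonal entries decay smoothly with distance.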

If we were given values for all of our measurements, then the full likelihood function (log-likelihood being presented for convenience) would be given by:

ln f(s | β, ϑ) = −(N/2) ln(2π) − (1/2) ln|C| − (1/2)(s − Xβ)ᵀ C⁻¹ (s − Xβ)  [2]

However, S is only directly observable when it is greater than some QL (which for the log-transformed variable S we write as QLS = ln QL). Whenever S(xi) < QLS, the precise value of S(xi) is not known, and the only information provided about S(xi) is that it is less than QLS. De Oliveira (2005) presents the likelihood function when some data are of this form (referred to as censored data). It is obtained by integrating the likelihood function for fully observable data, Eq. [2], over the unknown values of the censored data:

f(D | β, ϑ) = ∫_{sc < QLS} f(so, sc | β, ϑ) dsc  [3]

where we write so for the No observed log-transformed data (i.e., all observations greater than the QL), and sc for the unknown values of S at the Nc locations with censored data (and D for the observed and censored data together).

Since the integral in Eq. [3] is analytically intractable, we use a Monte Carlo method to approximate and maximize it, namely MCML (Christensen, 2004). We provide details of this method in the supplemental material. The MCML method is based on samples drawn from a “simulating distribution” by MCMC (for an introduction, see Gilks et al., 1996), and requires just one run of the MCMC sampler to compare likelihoods from multiple models. For any model, the parameters that optimize the approximation of the likelihood, Eq. [S5] of the supplemental material, can be found numerically. The optimized likelihoods from different models can then be compared through the AIC:

AIC = 2k − 2 ln L  [4]

where k is the number of parameters in the model, and L is the maximized value of the likelihood, Eq. [S5], under that model. The model with the smallest AIC is selected; the criterion provides a trade-off between model complexity (preferring models with fewer parameters) and the model’s maximized likelihood.
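Once the likelihoods of the competing fixed-effects models have been maximized, the AIC comparison itself is straightforward. The log-likelihood values and covariate sets below are purely hypothetical, to illustrate the trade-off:

```python
def aic(log_lik, k):
    """AIC = 2k - 2 ln L; smaller values are preferred."""
    return 2 * k - 2 * log_lik

# hypothetical maximized log-likelihoods and parameter counts
models = {
    "constant mean":      (-120.4, 4),   # intercept + 3 covariance parameters
    "+ SOC":              (-115.1, 5),
    "+ SOC + elevation":  (-114.8, 6),
}
scores = {name: aic(ll, k) for name, (ll, k) in models.items()}
best = min(scores, key=scores.get)
```

Here the second covariate barely improves the likelihood, so its extra parameter is not justified and the single-covariate model wins on AIC.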

Bayesian Prediction

De Oliveira (2005) and Fridley and Dixon (2007) show how Bayesian prediction can be performed for geostatistical models based on censored data. We provide some details of the Bayesian prediction approach in the supplemental material. We drop the terms for the fixed effects and covariance models, X and κ, from our notation, since in this section we assume that these have already been selected.

Briefly, we begin by setting a prior distribution for the parameters, β and ϑ, which should describe any useful information about the values of the parameters that we have before viewing the data. In this work, we assume this information is minimal, and therefore choose the prior distribution, ƒ0(β,ϑ), in such a way as to have a very limited impact on the predictive distribution. We do this in a similar fashion to Orton et al. (2009), and as described in the supplemental material.

Based on this prior distribution, the Bayesian predictive distribution is given by an integral (De Oliveira, 2005):

f(spred | D) = ∫ f(spred | so, sc, β, ϑ) f(sc, β, ϑ | D) dsc dβ dϑ  [5]

where D represents the full (censored and observed) data. The term f(spred | so, sc, β, ϑ) is a conditional distribution for spred, if we were given the observed data, so, the values of S at the censored data locations, sc, and the parameters on the right-hand side of the vertical line. It is given by a Gaussian probability density function with mean and variance calculated by simple kriging at xpred, conditional on so, sc, β, and ϑ. The other term inside the integral, f(sc, β, ϑ | D), represents the uncertainty in the conditioning variables and parameters, sc, β, and ϑ, and is called their posterior distribution, given the data, D. From De Oliveira (2005), this posterior distribution is:

f(sc, β, ϑ | D) ∝ ƒ0(β, ϑ) f(so, sc | β, ϑ) g(sc)  [6]

where g(sc) is the indicator function: g(sc) = 1 if all values of sc are less than QLS = ln QL, and g(sc) = 0 otherwise. We approximate the integral in Eq. [5] using an MCMC method, as described in the supplemental material. We use the mean of the predictive distribution as the prediction of S(xpred), as approximated by Eq. [S15] of the supplemental material.
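In Monte Carlo form, this predictive distribution amounts to averaging Gaussian kriging densities over posterior draws. A stripped-down sketch (our own Python; three data points, an exponential covariance, and fake "posterior" draws of the intercept stand in for the full MCMC output):

```python
import numpy as np

rng = np.random.default_rng(1)

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # data locations
x_pred = np.array([0.5, 0.5])                             # prediction location
s_obs = np.array([0.2, -0.1, 0.4])                        # log-scale data (hypothetical)

def expo(h, sill=1.0, a=1.0):
    """Isotropic exponential covariance (no nugget, for brevity)."""
    return sill * np.exp(-h / a)

H = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
C = expo(H)
c_vec = expo(np.linalg.norm(coords - x_pred, axis=1))

def krige(mu):
    """Simple-kriging mean and variance at x_pred for a given constant trend mu."""
    w = np.linalg.solve(C, c_vec)
    return mu + w @ (s_obs - mu), expo(0.0) - w @ c_vec

m0, v0 = krige(0.0)
# stand-in for posterior draws of the trend parameter (hypothetical)
beta_draws = rng.normal(0.0, 0.1, size=500)
pred_mean = np.mean([krige(b)[0] for b in beta_draws])   # MC estimate of the predictive mean
```

In the full method, the covariance parameters ϑ and the censored values sc would also vary across draws, so the averaged density properly propagates all sources of uncertainty.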

To provide predictions on the original scale, we use the median back-transform, which is recommended by Tolosana-Delgado and Pawlowsky-Glahn (2007) when kriging has been performed on a log-transformed variable; that is, if Ŝ(xpred) is the prediction of S(xpred), then we use Ẑ(xpred) = exp[Ŝ(xpred)] as our prediction of Z(xpred) on the original scale. The cumulative distribution function can also be calculated, which represents the probability of S(xpred) being less than any particular value, say spred; these values are the same on the original scale, that is, Prob[S(xpred) < spred] = Prob[Z(xpred) < exp(spred)].
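The equality of these probabilities on the two scales is easy to verify numerically (a small Python check, with hypothetical predictive moments):

```python
import numpy as np
from scipy import stats

mu, sd = 0.5, 0.6                 # hypothetical log-scale predictive mean and sd
z_hat = np.exp(mu)                # median back-transform to the original scale
s_t = np.log(2.0)                 # a threshold expressed on the log scale
p_s = stats.norm.cdf(s_t, mu, sd)                       # Prob[S(x_pred) < s_t]
p_z = stats.lognorm(s=sd, scale=np.exp(mu)).cdf(2.0)    # Prob[Z(x_pred) < exp(s_t)]
```

The two cumulative probabilities agree because the exponential is a monotone transform, which is exactly why exceedance probabilities need no back-transform correction.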

Mapping Probabilities of Exceeding Threshold Concentrations

Geostatistical approaches can be used to assist with land management decisions, such as deciding where further measures—more intense sampling and a possible cleanup operation for some contaminated plots—are necessary to contain and treat pollution, and reduce environmental exposure of the population. To do this, we define a concentration threshold, zT; if the concentration at any site were known to exceed zT, then the site would be considered as contaminated. However, we only have this “perfect” knowledge at the 105 data locations; between the data locations, we have our (geostatistical) predictions, which give us the probability, p̂(xpred) = Prob[Z(xpred) > zT], that the concentration at the location xpred exceeds the contamination level. Authorities might use these probabilities to identify areas that warrant more detailed investigation. We can consider different scenarios regarding attitude to risk; a risk-averse authority might deem further investigation to be necessary if the risk of contamination, p̂(xpred), is greater than a probability threshold of pt = 0.05, whereas a less risk-averse approach might use a higher probability threshold, pt = 0.1.
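Under a Gaussian predictive distribution on the log scale, the exceedance probability and the two risk-attitude flags can be computed as follows (our own sketch; the predictive moments are hypothetical):

```python
import numpy as np
from scipy import stats

def prob_exceed(mu_log, sd_log, z_t):
    """P[Z > z_t] when ln Z has a Gaussian predictive distribution."""
    return 1.0 - stats.norm.cdf(np.log(z_t), mu_log, sd_log)

p = prob_exceed(mu_log=0.0, sd_log=0.8, z_t=2.0)
flag_risk_averse = p > 0.05   # risk-averse probability threshold
flag_tolerant = p > 0.10      # less risk-averse probability threshold
```

A site can thus be flagged under one attitude to risk but not the other, which is why maps for several probability thresholds are informative.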

We have been unable to find much information for defining zT, the level of PCB-187 above which the soil could be considered as contaminated. In the Netherlands, a “target value” of 20 μg kg−1 has been defined for the total of six PCB congeners in soil (PCB-28, -52, -101, -138, -153, and -180; VROM, 2000). We have measurements of the concentrations of these six PCB congeners at the 105 data sites used in our analysis; a value of 20 μg kg−1 for the sum of the six PCB congeners roughly corresponds to a concentration of 2 μg kg−1 for PCB-187, and we therefore consider this as a definition for zT for mapping areas where further investigation would be required. The PCB-187 concentration for seven of our 105 data sites exceeded this threshold (see Fig. 2). We acknowledge that this threshold only constitutes a hypothetical example whose main purpose is to demonstrate and compare the geostatistical methods.

Validation of Predictions from Geostatistical Methods

When we use a geostatistical model, we should validate it to confirm that its assumptions are appropriate, which is typically done through a cross-validation study. However, a difficulty associated with the validation of predictions based on censored data is that the precise values of the censored data are not known. This means that the standard cross-validation statistics—the bias, mean squared error, and the mean and median of the standardized prediction errors, all computed for the transformed variable—cannot be computed. If we were to cross-validate against the values of QL/2 for the below-QL data, then the results would be biased in favor of an “imputed data” approach built on exactly these values. Therefore, we instead assess how well the methods predict the probability of exceeding a threshold concentration, zT. For mapping, we focus on a single threshold, zT = 2 μg kg−1, but in the cross-validation exercise, we use a range of thresholds (all above the QL), as follows.

We apply a fivefold cross-validation routine, which entails randomly splitting the data set into five subsets of 21 data. For each subset, xB = {xB1, …, xB21}, we estimate p̂(xBi), the probability of exceeding a threshold concentration, zT, at location xBi, based on the data from the remaining locations. We then repeat this process 10 times based on different random splits of the data set. For prediction using the censored data, we use a fully Bayesian approach, and resample the covariance parameters and values of the censored data each time one of the five subsets is removed. Similarly, for the imputed data approach, we re-estimate the covariance parameters each time a subset is removed.
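The splitting scheme can be sketched as follows (our own Python; only the bookkeeping, not the re-estimation of parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_folds, n_repeats = 105, 5, 10

all_splits = []
for _ in range(n_repeats):
    perm = rng.permutation(n)
    folds = np.array_split(perm, n_folds)   # five disjoint subsets of 21 sites
    all_splits.append(folds)
# for each fold, predictions at its 21 sites would be made from the other 84 sites
```

Repeating the random split 10 times reduces the dependence of the validation statistics on any one particular partition of the 105 sites.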

We use two measures to assess the predicted probabilities, p̂(xBi): the logarithmic and quadratic loss functions. We define δ(xBi) = 1 if the concentration at xBi were known to exceed zT (we refer to this as “contaminated”), and δ(xBi) = 0 otherwise. Based on the logarithmic score function (Good, 1952; Gneiting and Raftery, 2007), the logarithmic loss function (LLF) for a predicted probability p̂(xBi) is:

LLF(xBi) = −{δ(xBi) ln[p̂(xBi)] + [1 − δ(xBi)] ln[1 − p̂(xBi)]}  [7]

We use the mean of the LLF over all locations (MLLF) as our measure of prediction performance. If the predicted probability of contamination were p̂(xBi) = 1 for every contaminated location, and p̂(xBi) = 0 for every uncontaminated location, then we would achieve the optimum value of MLLF = 0. Larger values of MLLF correspond to worse predicted probabilities. Another possible measure is based on the Brier or quadratic score (Brier, 1950; Gneiting and Raftery, 2007), giving the quadratic loss function (QLF):

QLF(xBi) = [δ(xBi) − p̂(xBi)]²  [8]

of which we use the mean, MQLF. Again, worse predicted probabilities give larger values of QLF. The worst case of predicting a probability of 0 for all contaminated locations, and 1 for all uncontaminated locations, would result in an MQLF of 1; the MLLF would be undefined in this case, since assigning a zero probability when the location is in fact contaminated (and vice versa) gives an infinite LLF. We evaluate MLLF and MQLF for values of zT from the QL up to the maximum datum of 19.2 μg kg−1.
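The two loss functions can be computed directly from the indicator δ and the predicted probabilities p̂ (an illustrative Python sketch with made-up values):

```python
import numpy as np

def mllf(delta, p_hat):
    """Mean logarithmic loss: mean of -[d ln(p) + (1 - d) ln(1 - p)]."""
    return np.mean(-(delta * np.log(p_hat) + (1 - delta) * np.log(1 - p_hat)))

def mqlf(delta, p_hat):
    """Mean quadratic (Brier) loss: mean of (d - p)^2."""
    return np.mean((delta - p_hat) ** 2)

delta = np.array([1, 0, 0, 1])            # 1 = "contaminated" at the held-out site
p_hat = np.array([0.9, 0.2, 0.1, 0.6])    # predicted exceedance probabilities
```

Both losses reward probabilities near the observed indicator; the logarithmic loss penalizes confident mistakes much more heavily than the quadratic loss.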

Preliminary Analysis: Candidate Covariates

We consider several covariates for explaining and mapping the spatial variation of PCB data: the soil organic carbon (g kg−1), land use (three classes are present over the 105 data locations), elevation (m), population density (km−2), and road density (km km−2). We compare the goodness of fit of the covariates (and combinations of these covariates) based on the AIC. The following describes some preliminary analysis using these covariates, in which we define them more precisely: see Villanneau et al. (2011) for more details.

Soil Organic Carbon

The PCB concentrations in environmental media have often been observed to be affected by organic carbon content (Meijer et al., 2003). A soil that is rich in organic carbon will accumulate PCB concentrations more quickly, and we therefore consider the soil organic carbon (SOC) content as a possible explanatory variable. We have direct measurements of SOC at the 105 data locations. Elsewhere in the study area, direct SOC measurements are unavailable; however, we do have—exhaustively, from a fine grid over the study area—predictions of SOC from the mechanistic process model of Jones et al. (2005), and those from the regression model of Meersmans et al. (2012). Hereafter, we refer to these model predictions as SOCJ and SOCM, respectively. For model comparison, where the aim is to determine the best way of describing the PCB data using the explanatory variables, we use the 105 SOC measurements as a potential covariate. For mapping, we require covariate data at all prediction sites. Therefore, if SOC is chosen as an explanatory variable in the model selection step, we perform an additional “model selection for mapping” step, in which we replace the SOC measurements by the SOC model predictions, SOCJ and SOCM, in turn. Figure 3a, 3b, and 3c show the SOC measurements, SOCJ, and SOCM, respectively.

Fig. 3.

The potential covariates for explaining and mapping the spatial distribution of PCB-187 concentrations: (a) measured soil organic carbon (SOC), (b) predicted SOC from model of Jones et al. (2005), (c) predicted SOC from model of Meersmans et al. (2012), (d) elevation, (e) land-use classes from CLC database, (f) observed land-use classes at RMQS data locations, (g) population density, and (h) road density.



Elevation

Elevation could act as an indicator of PCB concentrations, since deposition at low-lying sites could occur after PCB transport (either terrestrially, via PCBs dissolved in water, or aerially, via volatilized PCBs). Figure 3d shows the elevation across the region, as derived from the 250- by 250-m digital elevation model, “BD ALTI,” which covers the entire French territory.

Land Cover

The land cover class can be considered as an indicator variable for the agricultural pressure (agricultural land will more likely have received direct input of PCBs at some point in the past) or for the soil properties (soils with high organic matter content can accumulate PCB concentrations). There are three different observed classes of land use at the 105 data locations: cultivated crops (at 73 locations), grasslands (22 locations), and forests (10 locations). We consider a different expectation for each land use in the geostatistical model. Figure 3f shows the observed land use at the 105 data locations. Again, if selected as a covariate for explaining the PCB data, we consider the additional “model selection for mapping” step, using the spatially exhaustive land-use data from the CLC 2000 database (Heymann et al., 1994), shown in Fig. 3e.

Population Density

The population density is a potential explanatory variable for the spatial variation of the PCB data, since it provides an indicator of the domestic use of fossil fuels, and hence of potential PCB sources. We extracted information on population density from the French national census (1999). The population density can be defined at different spatial scales (i.e., average of the population densities, per km2, over all pixels within a radius of k km of each location). We compared several values of k in this definition to give the population density covariate; we computed the optimized likelihood function, Eq. [S5], for each value of k, with results shown in Fig. 4. We chose the value of k = 10 km, which gave the largest optimized likelihood, and therefore consider this as the population density covariate; its spatial distribution is shown in Fig. 3g.
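The radius-k averaging used to define this covariate can be sketched with a KD-tree query (our own Python; the pixel locations and densities are simulated, not the census data):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
pixels = rng.uniform(0.0, 100.0, size=(2000, 2))   # hypothetical pixel centres (km)
density = rng.gamma(2.0, 50.0, size=2000)          # hypothetical per-pixel densities (km^-2)

tree = cKDTree(pixels)

def smoothed_density(site, k_km):
    """Average density over all pixels within k_km of the site."""
    idx = tree.query_ball_point(site, r=k_km)
    return float(np.mean(density[idx]))

cov_value = smoothed_density(np.array([50.0, 50.0]), k_km=10.0)
```

Repeating this for each candidate k, and refitting the model with the resulting covariate, gives the likelihood profile against k from which the best smoothing radius is chosen.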

Fig. 4.

Optimized log-likelihood with population density and road density as covariates, defined as the average of the population densities (per km2) within a radius of k km.


Road Density

The road density also provides an indicator of the domestic use of fossil fuels and potential PCB sources, and in addition may provide an indication of high industrial activity. We obtained information on road density from the road database, ROUTE 500, which covers the entire French territory. As with population density, the road density can be defined at different spatial scales. The scale of k = 7.5 km (i.e., average of the road densities, per km2, for all pixels within a radius of 7.5 km) gave the largest maximized likelihood (see Fig. 4), and we therefore consider this as the road density covariate; its spatial distribution is shown in Fig. 3h.


Model Comparison for Explaining PCB-187 Variability

We used the MCML method to compute the AIC for the 32 candidate models (i.e., with or without each of the five covariates included). In each case, we compared the Gaussian and exponential covariance functions (both with a nugget effect included), and the Gaussian plus nugget covariance function always gave the greater optimized log-likelihood. The AIC values for the best five fixed effects models—and also for the constant mean model—are shown as AIC1 in Table 2. We considered the exponential and Gaussian covariance models in particular because they have contrasting smoothness over short separation distances (see, e.g., Webster and Oliver, 2007). In some further analysis (not presented), we also considered the Matérn model (Matérn, 1960), which includes the Gaussian and exponential models as special cases. We fitted parameters for the Matérn model with the six fixed effects models in Table 2; in each case, the parameters approached those representing a Gaussian model, and we can therefore be confident that the choice of this model was appropriate.
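The exhaustive comparison of the 2^5 = 32 covariate subsets can be illustrated with a much-simplified sketch. The example below uses ordinary least squares with an independent Gaussian likelihood on simulated stand-in data; it deliberately ignores the censoring and the spatial correlation that the MCML method handles, and the covariate names and coefficients are invented for illustration only.

```python
import itertools

import numpy as np

rng = np.random.default_rng(1)
n = 105
names = ["pop_density", "soc", "elevation", "land_use", "road_density"]
X_all = rng.normal(size=(n, 5))  # stand-in covariates (invented)
# simulated response: only the first two covariates matter
y = 1.0 + 0.8 * X_all[:, 0] + 0.5 * X_all[:, 1] + rng.normal(size=n)

def gaussian_aic(y, X):
    """AIC = 2k - 2 log L for an ordinary least-squares Gaussian fit,
    counting the regression coefficients plus the residual variance."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / len(y)
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * (X.shape[1] + 1) - 2 * loglik

# enumerate all 32 subsets of the five covariates
results = []
for r in range(6):
    for subset in itertools.combinations(range(5), r):
        X = np.column_stack([np.ones(n)] + [X_all[:, j] for j in subset])
        results.append((gaussian_aic(y, X), [names[j] for j in subset]))
best_aic, best_covs = min(results)
```

With the simulated data above, the minimum-AIC subset recovers the two covariates that actually drive the response, mirroring how the AIC trades goodness-of-fit against the number of covariates.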

Table 2.

Akaike Information Criterion (AIC) values for the best six models. AIC1, AIC2, and AIC3 use the measured soil organic carbon (SOC) and observed land-use classes (where appropriate). AIC4 uses the model predictions of Jones et al. (2005) to give the SOC and the Corine Land Cover (CLC) 2000 database (Heymann et al., 1994) to give the land use; AIC5 uses the SOC model predictions of Meersmans et al. (2012) and the CLC database for the land use. AIC1, AIC2, AIC4, and AIC5 were computed using a simulating model with a constant mean and Gaussian plus nugget covariance function. AIC3 used the model selected by AIC1 (with SOC and population density as covariates and the Gaussian plus nugget covariance function) to give the simulating distribution. Simulating parameters were chosen for AIC1, AIC2, AIC4, and AIC5 based on a short initial run of the algorithm, and for AIC3 were the Monte Carlo maximum likelihood fitted parameters from AIC1.

Covariates                              AIC1    AIC2    AIC3    AIC4    AIC5
Population density, SOC                −0.12   −0.11   −0.12    3.59    3.96
Population density, SOC, altitude       1.85    1.86    1.84    5.57    5.94
Population density, SOC, road density   1.87    1.88    1.87    5.57    5.95
Population density                      2.00    2.01    2.05     NA†     NA
Population density, SOC, land use       3.58    3.60    3.63    6.98    7.18
None                                    8.00    8.00    8.17     NA      NA

† NA, not applicable (neither SOC nor land use is a covariate).

The AIC1 results in Table 2 show that the best model for explaining the variability of the PCB-187 data was with population density and SOC as covariates; we refer to it as {XE, κE}. The fitted regression parameters were −3.9, 0.0018, and 0.046 for the constant, population density, and SOC terms, respectively. The fitted Gaussian covariance function parameters for the residuals were 4.8 for the total variance, 0.11 for the proportion of the variance that is spatially correlated, and 150 km for the effective range.
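The fitted Gaussian-plus-nugget covariance can be sketched as follows. Note the decay convention is an assumption: here the correlated part decays as exp(−3(h/a)²), a common convention under which a is the effective (practical) range; the paper's exact parameterization is not reproduced.

```python
import numpy as np

def gaussian_nugget_cov(h, total_var, prop_corr, eff_range):
    """Gaussian-plus-nugget covariance at lag distance h (km).
    The correlated part decays as exp(-3 (h / a)^2), so that 'a' is the
    effective range (an assumed convention, not taken from the paper)."""
    c1 = total_var * prop_corr                 # spatially correlated variance
    h = np.asarray(h, dtype=float)
    cov = c1 * np.exp(-3.0 * (h / eff_range) ** 2)
    return np.where(h == 0.0, total_var, cov)  # nugget contributes only at h = 0
```

With the fitted values (total variance 4.8, correlated proportion 0.11, effective range 150 km), most of the variance is nugget, so the covariance drops sharply away from the origin and then decays smoothly.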

The MCML method is based on an approximation of the likelihood function using values sampled from a simulating distribution, which in the case of AIC1 used a constant mean and Gaussian plus nugget covariance function. We tested the sensitivity of the results to the samples drawn from this distribution by repeating the method based on a new set of samples: the re-evaluated AIC values for the six models in Table 2 are shown in the table as AIC2, and these results demonstrate that a sufficient number of samples were taken for good approximations of the AIC values. We also tested the sensitivity of the results to the simulating model by calculating AIC values for the six models in Table 2 with the simulating distribution defined using the selected model, {XE, κE}, and the associated MCML parameter estimates, giving the AIC values presented as AIC3 in Table 2. We rescaled AIC3 so that AIC1 = AIC3 for {XE, κE} (i.e., row 1 of Table 2, since only the differences in AIC are important); the results show that the AIC values were reasonably insensitive to this choice of simulating distribution.

We note that if we were to use AIC with the imputed data approach (i.e., with the value of QL/2 replacing all below-QL data, and using the conventional geostatistical framework), then we would arrive at the same set of explanatory variables. However, by following a more stringent statistical methodology to arrive at these results using the censored data approach, we may have more confidence in them.
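The distinction between the two approaches can be seen in a stripped-down, non-spatial example: estimating the mean of a Gaussian variable censored at a QL. All the numbers below are invented for illustration; a below-QL observation contributes the probability mass below the QL to the censored likelihood, rather than a point value at QL/2.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
true_mu, true_sd, ql = 1.0, 1.0, 0.9       # illustrative values only
z = rng.normal(true_mu, true_sd, size=500)
censored = z < ql

# Imputed approach: replace every below-QL value by QL/2
z_imp = np.where(censored, ql / 2.0, z)
mu_imp = z_imp.mean()

# Censored approach: below-QL observations enter the likelihood
# through the cumulative probability of falling below the QL
def neg_loglik(theta):
    mu, log_sd = theta
    sd = np.exp(log_sd)
    ll = norm.logpdf(z[~censored], mu, sd).sum()
    ll += norm.logcdf(ql, mu, sd) * censored.sum()
    return -ll

mu_cen = minimize(neg_loglik, x0=[0.0, 0.0]).x[0]
```

In this setting the censored-likelihood estimate recovers the true mean closely, while the QL/2 substitution introduces a systematic bias; with richer models (spatial correlation, covariates) the two approaches can in principle also select different covariate sets.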

Model Comparison for Mapping

If we consider the selected model, XE, for mapping the PCB concentrations, a problem arises in that the SOC data (i.e., one of the selected covariates) are only available at the 105 sampling locations. Therefore, we consider an additional “model selection for mapping” step, in which we replace the point observations of SOC by the SOCJ and SOCM model predictions at the data locations, and replace the observed land-use classes by the classes from the CLC database. These covariates are available on a fine-scale grid of the study area, which would permit their use for mapping. We re-evaluated the AIC for the four models in Table 2 that included either SOC or land use, with results shown as AIC4 (for SOCJ) and AIC5 (for SOCM) in Table 2. These results show that the best model for mapping the PCB-187 concentrations across the study area was with population density as the single covariate. We refer to this model as {XM, κM}. The refitted regression parameters were −3.0 and 0.0017 for the constant and population density terms, and the fitted Gaussian covariance function parameters were 5.0 for the total variance, 0.13 for the spatially correlated proportion, and 150 km for the effective range.


We used the geostatistical approaches to predict the PCB-187 concentration at the nodes of a grid (with 250-m spacings) across the study area. We investigated the effect of the uncertainty in the below-QL data by comparing the censored data approach with an imputed data approach, in which all below-QL data are set to the value of QL/2. We investigated the effect of the trend by comparing predictions based on a constant mean in the LMM with those based on the AIC-selected fixed effects model for mapping, XM. This gave four approaches to compare: (i) imputed data approach with constant mean (IO); (ii) censored data approach with constant mean (CO); (iii) imputed data with {XM, κM}, the AIC-selected fixed effects (IM); and (iv) censored data with AIC-selected fixed effects (CM). All four approaches used a Gaussian covariance function to model the spatial correlation of the residuals. For the imputed data approach, we used restricted maximum likelihood to estimate the covariance parameters, and ordinary and universal kriging to calculate the IO and IM predictions, respectively (see Webster and Oliver, 2007).

Figure 5 shows the predictions from (a) IO, (b) CO, (c) IM, and (d) CM. We note that the predictions are not appropriate in the built-up areas, where no samples were collected, and where the relationship between population density and PCB-187 concentration is not valid. Therefore, we mask the predictions in these urban areas with the highest population densities. The maps of Fig. 5a and 5b do not show the detail of the other maps, as they do not take into account the information from the covariate (population density), and just show the broad-scale spatial pattern. Figure 5c and 5d show similar general patterns to each other, due to the information from the relationship with the covariate. The main differences between Fig. 5c and 5d—the censored and imputed data approaches—are in the areas of lower predicted concentrations, where the censored data approach predicts values below the QL, but the imputed data approach does not. This was also the case without the fixed effects—see Fig. 5a and 5b. The pattern shown by the CM method would appear to make more intuitive sense, given the cluster of below-QL data that were present in the region of lowest predictions (compare with Fig. 2).

Fig. 5.

Predicted PCB-187 concentrations across the study area from (a) imputed data approach with constant mean (IO), (b) censored data approach with constant mean (CO), (c) imputed data with {XM, κM}, the Akaike Information Criterion (AIC)–selected fixed effects for mapping (IM), and (d) censored data with AIC-selected fixed effects (CM). Predictions in urban areas are masked.


The fact that all predictions from the imputed data approach were greater than the QL was somewhat surprising, since the imputed values were half of this QL. We investigated why this was the case by calculating the mean of the MCMC-sampled values for each of the censored data. For each of the 37 censored data, the mean of the samples was less than QL/2, indicating that this value is an overestimate of the actual concentrations at these locations. This, together with the fact that the Gaussian covariance function was selected to model the spatially correlated variance (which represents a high degree of smoothness in the underlying spatial process, and results in a smooth prediction surface), could explain why the imputed data approach gave no predictions below the QL, whereas the censored data approach did.
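The finding that the MCMC-sampled means fall below QL/2 has a simple analytic counterpart: the conditional mean of a Gaussian variable given that it lies below a censoring point can sit well below half that point. The numbers below are illustrative only, not the paper's fitted values.

```python
from scipy.stats import norm

# illustrative Gaussian with mean 1.0, standard deviation 1.0, QL = 0.9
mu, sd, ql = 1.0, 1.0, 0.9
alpha = (ql - mu) / sd
# conditional mean of the variable given that it falls below the QL:
# E[Z | Z < QL] = mu - sd * phi(alpha) / Phi(alpha)
cond_mean = mu - sd * norm.pdf(alpha) / norm.cdf(alpha)
```

Here the conditional mean is roughly 0.14, far below QL/2 = 0.45, which is consistent with the imputed value QL/2 overestimating the actual below-QL concentrations.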

Figure 6 shows the probability of exceeding a contamination threshold of zT = 2 μg kg−1, as estimated by the (a) IO, (b) CO, (c) IM, and (d) CM approaches. Comparing Fig. 6c and 6d, we can see that the censored data approach generally gives higher probabilities of contamination than the imputed data approach in the northeast part of the study region. This was a result of larger prediction variances from the censored data method. In the imputed data method, the variance is underestimated because all imputed values are the same. The censored data method does not suffer from this problem, since the integral in the predictive distribution, Eq. [5], allows for the uncertainty in the censored data (which can take any value below the QL). In the following section, we use cross-validation to demonstrate that the censored data method gave the better assessment of uncertainty, and we may therefore have more confidence in the validity of the resulting maps.
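The link between larger prediction variances and higher exceedance probabilities can be made concrete under a simplifying assumption: if the predictive distribution at a location were Gaussian with a given mean and standard deviation (the paper's actual predictive distribution is the Bayesian integral of Eq. [5]), the exceedance probability is one minus the standardized normal CDF.

```python
from scipy.stats import norm

def p_exceed(pred_mean, pred_sd, z_t):
    """P(Z > z_t) under an assumed Gaussian predictive distribution."""
    return 1.0 - norm.cdf((z_t - pred_mean) / pred_sd)
```

For a threshold above the predicted mean, increasing the predictive standard deviation increases the exceedance probability; for example, `p_exceed(0.8, 1.2, 2.0)` exceeds `p_exceed(0.8, 0.8, 2.0)`. This is why the censored data method, with its larger prediction variances, maps higher probabilities of contamination in parts of the study region.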

Fig. 6.

Estimated probability of PCB-187 concentration >2 μg kg−1 from (a) imputed data approach with constant mean (IO), (b) censored data approach with constant mean (CO), (c) imputed data with {XM, κM}, the Akaike Information Criterion (AIC)–selected fixed effects for mapping (IM), and (d) censored data with AIC-selected fixed effects (CM). Predictions in urban areas are masked.



To validate the predictions from the geostatistical approaches, we used a fivefold cross-validation routine. We computed the MLLF and MQLF, using Eq. [7] and [8], for values of a concentration threshold, zT, between the QL and the maximum concentration in our data. Figure 7a, 7b, 7c, and 7d show the results. The figures also show results from a simple nongeostatistical method (S), in which we calculated the probability of exceeding zT for each location xBi based on the frequency of exceeding zT in the learning data, Z(xB): note that the MLLF plots for S finish at zT = 5 μg kg−1 because beyond this, there were cases when the predicted probability was exactly zero, when in fact Z(xBi) > zT. All plots show that generally the geostatistical methods improved predictions. Figure 7a shows the MLLF for the IM and CM methods. There was little difference between these methods in terms of their predicted probabilities of exceeding intermediate concentrations (the two lines are very close to each other between approximately zT = 0.1 μg kg−1 and zT = 1 μg kg−1). The methods did show some differences in their predicted probabilities of exceeding the more extreme concentrations. We also saw this effect (for the probability of exceeding a high concentration) in the mapping exercise, when comparing Fig. 6c and 6d. Figure 7a shows that the censored data method performed better than the imputed data method for predicting the probabilities of very low and high concentrations. Figure 7b shows similar results for the probabilities of exceeding low concentrations in terms of the MQLF, although this measure shows no differences at the higher concentrations. We also saw a very similar relationship between the results of the IO and CO methods, but for conciseness we do not present these.
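The two cross-validation scores can be sketched as follows. Equations [7] and [8] are not reproduced in this excerpt, so these are plausible forms only: a mean logarithmic (negative log) loss and a mean quadratic (Brier-type) loss over the validation locations.

```python
import numpy as np

def mllf(p, exceed):
    """Mean logarithmic loss: minus the log of the probability that was
    assigned to the outcome that occurred (a plausible form of Eq. [7])."""
    p = np.asarray(p, dtype=float)
    e = np.asarray(exceed, dtype=bool)
    return float(-np.mean(np.where(e, np.log(p), np.log(1.0 - p))))

def mqlf(p, exceed):
    """Mean quadratic (Brier-type) loss between the predicted probability
    and the 0/1 outcome (a plausible form of Eq. [8])."""
    p = np.asarray(p, dtype=float)
    e = np.asarray(exceed, dtype=float)
    return float(np.mean((e - p) ** 2))
```

Note that the logarithmic loss is infinite whenever a method assigns probability exactly zero to an exceedance that occurs, which is why the MLLF plots for the simple frequency-based method S cannot be extended beyond zT = 5 μg kg−1.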

Fig. 7.

Mean of the (a, c) logarithmic loss function (MLLF) and (b, d) quadratic loss function (MQLF) from cross-validation, plotted against the concentration threshold, zT (μg kg−1). All plots show results from S: simple nongeostatistical method. (a) and (b) compare methods built on imputed (IM) and censored (CM) data with fixed effects selected for mapping (population density). (c) and (d) use the censored data approach to compare the fixed effects functions: no fixed effects (CO), fixed effects selected for mapping (population density, CM) and fixed effects selected for explaining PCB data (population density and soil organic carbon, CE).


Figure 7c uses the MLLF to compare the fixed effects models. In addition to CO and CM, we also plot results for the censored data approach with {XE, κE}, the fixed effects model selected in the section, Model comparison for explaining PCB-187 variability (which included the SOC measurements as well as population density); we refer to this approach as CE. Again, the differences are more pronounced for the extreme concentrations. It can also be seen that the information from both population density and SOC helps predict the probability of exceeding a low concentration. At the high concentrations, it is mainly the information from the population density that helps the predictions. Figure 7d shows similar results in terms of the MQLF measure. These results show that if we had more intensive SOC data, this could help us better map the PCB-187 concentrations (particularly the probabilities of having concentrations below the QL). They also show that the population density provides a good covariate for mapping the probabilities of exceeding a high threshold, such as the one we used in the mapping exercise.

Discussion and Conclusions

We have used the case study of PCB-187 concentrations in a region of northern France to demonstrate how to deal with below-QL data throughout a geostatistical analysis: selecting a model, mapping predictions and risks, and validating the results based on this type of data. For model selection, we used an MCML method (Christensen, 2004) to fit parameters based on censored data; this allowed us to use the AIC to compare models. For prediction, we used a Bayesian framework (De Oliveira, 2005; Fridley and Dixon, 2007) to fully account for parameter uncertainty.

In this case study, the model selected as the best for explaining the spatial distribution of the PCB-187 data (with SOC and population density as the fixed effects) would have also been selected had we used a simple method, replacing all below-QL data by half the QL (referred to as an imputed data approach). However, that is not to say that the same model would be chosen by both approaches in other case studies; by considering a more appropriate representation of the uncertainty resulting from the below-QL data (through a method built on censored data) we may have more confidence that the selected model is not some artifact of the imputed data approach.

The covariates selected under the AIC criterion as the best for explaining the spatial distribution of the PCB-187 concentration data—SOC and population density—are in general agreement with the findings of other studies. It has been shown that high concentrations of PCBs can be associated with soils that are rich in organic matter content (due to a high rate of PCB accumulation; Smith et al., 1993; Krauss et al., 2000; Meijer et al., 2003) and proximity to industrial sources, for which we used population density as an indicator (Motelay-Massei et al., 2004; Salihoglu et al., 2011).

Because SOC measurements were only available at the 105 data locations, we could not use this variable directly for mapping. We considered other sources of spatially exhaustive SOC information (two maps of SOC model predictions across the region), but these did not explain the variability of PCB-187 concentrations as well as the measured SOC. In fact, for mapping, the model selected under AIC included just population density as the single fixed effect. The importance of the information from the fixed effects was confirmed in our cross-validation exercise.

The maps of predictions produced under the imputed and censored data approaches showed differences, especially in the areas of lowest concentrations, where the censored data approach was able to predict concentrations below the QL, but the imputed data approach did not. Furthermore, differences were evident between the maps of the predicted probabilities of exceeding a contamination threshold (based on an assumed threshold for PCB-187 of 2 μg kg−1 that we justified to some extent by comparison to a Dutch criterion; VROM, 2000). The results of our cross-validation exercise demonstrated that these differences were justified, and that the censored data approach better predicted the probabilities of exceeding threshold concentrations, particularly for the more extreme concentrations. It is possible that we would see larger differences between the methods if there were a larger proportion of spatially correlated variance (this was estimated as 0.13 by MCML with fixed effects based on the population density). We will investigate whether this would lead to larger benefits from the censored data approach in further studies.

In this case study we produced smooth maps of the underlying variation of PCB-187. More generally, observations of PCBs might include extreme values at sites of localized pollution. These extreme values can distort the underlying variation. Robust geostatistical methods are designed to deal with this issue by separating out the anomalous process from the background levels (Marchant et al., 2010; Saby et al., 2011). These methods could potentially be adapted to deal with censored data—perhaps using the cross-validation loss functions to locate the outlying observations and applying left-censored data to deal with them—but this is beyond the scope of the present article.

The practice of laboratories reporting values only for data above the QL (i.e., below-QL data are reported only as such) does not make the most effective use of the information. If repeated measurements are available (as is often the case, since they are necessary for the laboratory to validate the QL), then geostatistical methods exist that account for this information and allow for the large relative uncertainties in measured values below the QL (e.g., Orton et al., 2009). Each measurement below the QL can then be treated like any other datum, and information—such as an observation close to the QL likely representing a higher concentration than an observation far below the QL—is not lost. If values below the QL are reported, it is important that they are flagged as subject to large uncertainty, which must be dealt with in any subsequent statistical analysis. When values for the below-QL data are not reported, however, the framework described in this paper provides a sensible outline for geostatistical analyses of the data.

Villanneau et al. (2011) considered the possibility of spatial analyses for 90 POPs across the same region of France as the present study; 59 of these POPs had >50 (out of 105) below-QL data. Analysis of such data sets with large proportions of below-QL data by substituting values for the censored data can give biased estimates of parameters (De Oliveira, 2005) or obscure trends in the data. The work in this paper has demonstrated an appropriate methodology for dealing with below-QL data in a geostatistical analysis and by doing so we should be confident of avoiding such misleading results.


Acknowledgments

The sampling and classical soil analyses were supported by a French Scientific Group of Interest on soils, the “GIS Sol,” involving the French Ministry in charge of Ecology and Sustainable Development (MEDDTL), the French Ministry in charge of Agriculture (MAAPRAT), the French Agency for Environment and Energy Management (ADEME), and the National Institute for Agronomic Research (INRA). We thank all the soil surveyors and technical assistants involved in sampling the sites (Infosol, ISA Lille, CDA 76, CDA 80). The PCB analyses were supported by a grant from the French Observatory for Pesticide Residues (ORP). B. P. Marchant’s contribution forms part of Rothamsted Research’s program in Mathematical and Computational Biology, funded by the Biotechnology and Biological Sciences Research Council through its strategic grant to Rothamsted Research. Thanks are also due to Nicolas Soler-Dominguez for preparing the fine-earth samples, to Philippe Berche, who was responsible for soil archiving, and to Line Boulonne for comments on an early draft of the paper.




