A Synthesis of Current Parameterization Approaches and Needs for Further Improvements
- L.R. Ahuja and
- Liwang Ma
Parameterization of a system model includes both calibration and evaluation. Calibration derives a set of parameters that adequately describes a dataset from an experiment, whereas evaluation (or validation in the loose sense) tests the predictions of the calibrated model under different experimental conditions (i.e., against independent datasets). However, because of the complexity of system models, there is no standard method for parameterizing them. Methods reported in this book and elsewhere are often model and user dependent. Manual calibration requires considerable experience on the user's part, but autocalibration is not easy either, given the high level of skill needed to use sophisticated optimization software and to construct an objective function over which to optimize. In addition, automated calibration procedures may vary with the data available for calibration and with preferences for constructing the objective function. Nevertheless, these chapters clearly establish principles for model parameterization: sequential calibration of a model, quality control of input data, balanced calibration among system components, and the need for datasets that cover a wide range of experimental conditions. This chapter also identifies research needs for further improvement in model parameterization, such as better collaboration between modelers and field scientists, accounting for the spatial and temporal variability of field measurements, development of science modules on a common platform for better communication among modelers, and better education for the next generation of system modelers.
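The autocalibration idea discussed above — deriving parameters by minimizing an objective function against experimental data — can be sketched in miniature. The linear toy model, the synthetic observations, the RMSE objective, and the grid of candidate parameter values below are all illustrative assumptions, not the chapter's method; real system models have many interacting parameters and far more elaborate optimizers.

```python
# Hypothetical sketch of autocalibration: pick the parameter value that
# minimizes an objective function (here, root-mean-square error) between
# model predictions and observations. The linear "model" and the data
# are illustrative stand-ins for a real system model.

import math

def model(a, x):
    """Toy system model: predicted output for input x given parameter a."""
    return a * x

def rmse(a, inputs, observed):
    """Objective function: RMSE between predictions and observations."""
    sq_errors = [(model(a, x) - y) ** 2 for x, y in zip(inputs, observed)]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

def calibrate(inputs, observed, candidates):
    """Grid-search calibration: candidate parameter with the lowest RMSE."""
    return min(candidates, key=lambda a: rmse(a, inputs, observed))

# Synthetic "experiment": data roughly consistent with a = 2.0.
inputs = [1.0, 2.0, 3.0, 4.0]
observed = [2.1, 3.9, 6.2, 7.8]
candidates = [i / 10 for i in range(10, 31)]  # 1.0 to 3.0 in steps of 0.1

best = calibrate(inputs, observed, candidates)
print(best)  # → 2.0
```

Evaluation would then rerun the model with `best` held fixed against an independent dataset; how the objective function weights multiple outputs is exactly the user-dependent choice the chapter highlights.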
Copyright © 2011 ASA, CSSA, SSSA, 5585 Guilford Rd., Madison, WI 53711-5801, USA.