Repository URL:
http://philsci-archive.pitt.edu/id/eprint/11833
Author(s):
Werndl, Charlotte
Preprint description:
Many examples of calibration in climate science raise no alarms regarding model reliability. We examine one such example and show that, in employing Classical Hypothesis-testing, it involves calibrating a base model against data that are also used to confirm the model. This runs counter to the "intuitive position" (in favour of use-novelty and against double-counting). We argue, however, that aspects of the intuitive position are upheld by some methods, in particular the general Cross-validation method. We also discuss how Cross-validation relates to other prominent Classical methods, such as the Akaike Information Criterion and the Bayesian Information Criterion.
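
To make the contrast concrete, the following minimal sketch (not drawn from the paper) illustrates the difference between the two selection strategies the description mentions: Cross-validation scores a model on held-out data, so the data used to confirm differ from the data used to calibrate, whereas AIC and BIC fit and score on the same data and substitute a complexity penalty for use-novelty. The data, the polynomial model family, the noise level, and the helper functions are all hypothetical, assumed only for illustration.

```python
# Illustrative sketch: Cross-validation vs. AIC/BIC model selection.
# All data and model choices here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "observations": a quadratic signal plus Gaussian noise.
x = np.linspace(0.0, 1.0, 60)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(scale=0.2, size=x.size)


def rss(degree, x_train, y_train, x_test, y_test):
    """Fit a polynomial of the given degree on the training data and
    return the residual sum of squares on the test data."""
    coeffs = np.polyfit(x_train, y_train, degree)
    resid = y_test - np.polyval(coeffs, x_test)
    return float(np.sum(resid**2))


def cv_score(degree, k=5):
    """k-fold Cross-validation: calibrate on k-1 folds, confirm on the
    held-out fold, so calibration and confirmation data are disjoint."""
    folds = np.array_split(rng.permutation(x.size), k)
    total = 0.0
    for fold in folds:
        train = np.setdiff1d(np.arange(x.size), fold)
        total += rss(degree, x[train], y[train], x[fold], y[fold])
    return total / x.size  # mean held-out squared error


def aic_bic(degree):
    """AIC and BIC under Gaussian errors: the same data are used for
    calibration and scoring, with a penalty term for model complexity."""
    n = x.size
    k = degree + 2  # polynomial coefficients plus the noise variance
    in_sample_rss = rss(degree, x, y, x, y)
    loglik = -0.5 * n * (np.log(2 * np.pi * in_sample_rss / n) + 1)
    return 2 * k - 2 * loglik, k * np.log(n) - 2 * loglik


for d in range(1, 6):
    aic, bic = aic_bic(d)
    print(f"degree {d}: CV={cv_score(d):.4f}  AIC={aic:.1f}  BIC={bic:.1f}")
```

In this toy setting all three scores tend to favour the degree-2 model, but they get there differently: Cross-validation withholds data from calibration, while AIC and BIC double-count the data and compensate with a penalty.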