Abstract
In model selection, usually a "best" predictor is chosen from a collection
{m̂(·, s)} of predictors, where m̂(·, s) is the minimum least-squares
predictor in a collection U_s of predictors. Here s is a complexity parameter;
that is, the smaller s, the lower dimensional/smoother the models
in U_s.
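As a minimal sketch of such a collection (illustrative only, not from the paper), take s to be a polynomial degree, U_s the polynomials of degree at most s, and m̂(·, s) the least-squares fit within U_s:

```python
import numpy as np

def fit_sequence(x, y, s_max=8):
    """The collection {m-hat(., s)}: for each complexity s, the minimum
    least-squares predictor within U_s (here, polynomials of degree <= s)."""
    return {s: np.polyfit(x, y, deg=s) for s in range(1, s_max + 1)}

def predict(coefs, x_new):
    """Evaluate one predictor m-hat(., s) at new points."""
    return np.polyval(coefs, x_new)
```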
If L is the data used to derive the sequence {m̂(·, s)}, the procedure is
called unstable if a small change in L can cause large changes in {m̂(·, s)}.
With a crystal ball, one could pick the predictor in {m̂(·, s)} having
minimum prediction error. Without prescience, one uses test sets, cross-validation
and so forth. The difference in prediction error between the
crystal-ball selection and the statistician's choice we call the predictive loss.
For an unstable procedure the predictive loss is large.
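Continuing the hypothetical polynomial sketch, the predictive loss can be estimated by comparing the cross-validation choice of s with the crystal-ball choice judged on a held-out test set (all names and the K-fold scheme are illustrative assumptions):

```python
import numpy as np

def prediction_error(coefs, x_test, y_test):
    """Mean squared prediction error of a single predictor."""
    return np.mean((np.polyval(coefs, x_test) - y_test) ** 2)

def predictive_loss(x, y, x_test, y_test, s_max=8, n_folds=5, rng=None):
    """PE(statistician's CV choice of s) minus PE(crystal-ball choice)."""
    rng = np.random.default_rng(rng)
    models = {s: np.polyfit(x, y, deg=s) for s in range(1, s_max + 1)}
    # Crystal ball: pick s by its true test-set error (normally unavailable).
    pe = {s: prediction_error(c, x_test, y_test) for s, c in models.items()}
    # Statistician: pick s by K-fold cross-validation on the training data L.
    folds = np.array_split(rng.permutation(len(x)), n_folds)
    cv = {}
    for s in models:
        errs = []
        for hold in folds:
            keep = np.setdiff1d(np.arange(len(x)), hold)
            c = np.polyfit(x[keep], y[keep], deg=s)
            errs.append(np.mean((np.polyval(c, x[hold]) - y[hold]) ** 2))
        cv[s] = np.mean(errs)
    s_cv = min(cv, key=cv.get)
    return pe[s_cv] - min(pe.values())
```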
This is shown by some analytics in a simple case and by simulation
results in a more complex comparison of four different linear regression
methods. Unstable procedures can be stabilized by perturbing the data,
getting a new predictor sequence {m̂′(·, s)} and then averaging over many
such predictor sequences.
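A minimal sketch of this perturb-and-average stabilization, assuming bootstrap resampling of the training pairs as the perturbation (the concrete perturbation mechanism is an assumption here, not taken from the abstract):

```python
import numpy as np

def stabilized_predictor(x, y, s, n_reps=50, rng=None):
    """Average the predictors m-hat'(., s) fitted to many perturbed
    versions of the data L (perturbation assumed: bootstrap resampling)."""
    rng = np.random.default_rng(rng)
    n = len(x)
    fits = []
    for _ in range(n_reps):
        idx = rng.integers(0, n, size=n)   # one perturbed data set L'
        fits.append(np.polyfit(x[idx], y[idx], deg=s))
    def m_bar(x_new):
        # The stabilized predictor: the average over all perturbed fits.
        return np.mean([np.polyval(c, x_new) for c in fits], axis=0)
    return m_bar
```

Averaging smooths out the jumps that small changes in L would otherwise induce in the selected predictor, which is the sense in which the procedure is stabilized.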