Abstract
Classical statistical theory ignores model selection in assessing estimation accuracy. Here we consider bootstrap methods for computing
standard errors and confidence intervals that take model selection into account. The methodology involves bagging, also known as bootstrap
smoothing, to tame the erratic discontinuities of selection-based estimators. A useful new formula for the accuracy of bagging then provides
standard errors for the smoothed estimators. Two examples, nonparametric and parametric, are carried through in detail: a regression model
where the choice of degree (linear, quadratic, cubic, ...) is determined by the Cp criterion, and a Lasso-based estimation problem.
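
As a rough illustration of the setup sketched above (not code from the paper), the following Python snippet bags a Cp-selected polynomial fit on simulated toy data and computes a smoothed standard error from the covariances between the resampling counts and the bootstrap replications, in the spirit of the accuracy formula mentioned in the abstract. The helper names (cp_degree, fit_and_predict), the simulated data, and the assumption of a known noise variance are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def cp_degree(x, y, max_degree=4, sigma2=1.0):
    """Pick the polynomial degree minimizing Mallows' Cp (sigma2 assumed known here)."""
    n = len(y)
    best_deg, best_cp = 1, np.inf
    for d in range(1, max_degree + 1):
        coefs = np.polyfit(x, y, d)
        rss = np.sum((y - np.polyval(coefs, x)) ** 2)
        cp = rss / sigma2 - n + 2 * (d + 1)  # Mallows' Cp
        if cp < best_cp:
            best_deg, best_cp = d, cp
    return best_deg

def fit_and_predict(x, y, x0):
    """Selection-based estimator: fit at the Cp-chosen degree, then predict at x0."""
    d = cp_degree(x, y)
    return np.polyval(np.polyfit(x, y, d), x0)

# Toy data and a point x0 at which the regression function is estimated.
n = 50
x = np.linspace(-2, 2, n)
y = 1.0 + 0.5 * x - 0.3 * x**2 + rng.normal(scale=1.0, size=n)
x0 = 1.5

# Bagging / bootstrap smoothing: average the selection-based estimator
# over nonparametric bootstrap resamples, recording resampling counts.
B = 2000
reps = np.empty(B)
counts = np.zeros((B, n))
for b in range(B):
    idx = rng.integers(0, n, n)
    counts[b] = np.bincount(idx, minlength=n)
    reps[b] = fit_and_predict(x[idx], y[idx], x0)

smoothed = reps.mean()  # bagged (smoothed) estimate at x0

# One version of a smoothed standard error: per-observation covariances
# between resampling counts and replications, combined in quadrature.
cov_j = ((counts - counts.mean(axis=0)) * (reps - smoothed)[:, None]).mean(axis=0)
smoothed_se = np.sqrt(np.sum(cov_j ** 2))

print(f"smoothed estimate at x0: {smoothed:.3f}, smoothed SE: {smoothed_se:.3f}")
```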