For a non-linear parameterised model, the effect of withdrawing an example from the training set can be predicted. We focus on predicting the error on the left-out example, and the confidence interval for the prediction of that example. We derive a rigorous expression for the first-order expansion, in parameter space, of the gradient of a quadratic cost function, and state the conditions under which it is valid. From this we derive approximate expressions for the prediction error on a given example, and for its confidence interval, had that example been withdrawn from the training set. We show that the influence of an example on the model can be summarised by a single parameter. These results apply directly to leave-one-out cross-validation, with a considerable reduction in computation time with respect to conventional leave-one-out. The paper focuses on the theoretical aspects of the question; both academic illustrations and large-scale industrial examples are described in [9].
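The idea that one parameter per example suffices can be illustrated in the linear least-squares special case, where the result is exact rather than a first-order approximation: the leave-one-out residual is e_i / (1 - h_ii), with h_ii the i-th diagonal element (the "leverage") of the hat matrix. The sketch below uses hypothetical synthetic data, not an example from the paper.

```python
import numpy as np

# Illustrative sketch on synthetic data: for linear least squares, the
# leave-one-out residual has the closed form e_i / (1 - h_ii), where h_ii
# is the leverage of example i -- a single parameter summarising its
# influence, in the spirit of the general non-linear result.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=30)

H = X @ np.linalg.solve(X.T @ X, X.T)    # hat (influence) matrix
e = y - H @ y                            # residuals on the full training set
loo_approx = e / (1.0 - np.diag(H))      # "virtual" leave-one-out residuals

# Verify against explicit retraining with each example withdrawn
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    w, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    loo_exact = y[i] - X[i] @ w
    assert np.isclose(loo_exact, loo_approx[i])
```

The loop retrains the model n times only to check the formula; the virtual leave-one-out residuals themselves come from a single fit, which is the source of the computational saving mentioned above.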