We investigate practical selection of hyperparameters for support vector machine (SVM) regression. The proposed methodology advocates analytic parameter selection directly from the training data, rather than the resampling approaches commonly used in SVM applications. In particular, we describe a new analytic prescription for setting the value of the ε-insensitive zone as a function of training sample size. Good generalization performance of the proposed parameter selection is demonstrated empirically on several low-dimensional and high-dimensional regression problems. Further, we point out the importance of Vapnik's ε-insensitive loss for regression problems with finite samples. To this end, we compare the generalization performance of SVM regression with that of regression using least-modulus loss and standard squared loss. These comparisons indicate superior generalization performance of SVM regression in sparse-sample settings, across various types of additive noise.
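To make the idea of data-driven parameter selection concrete, the following sketch computes ε and C analytically from the training responses alone. The specific formulas are not stated in this abstract and are assumptions here: ε is taken to shrink with sample size as √(ln n / n) scaled by a noise estimate, and C is set from the range of the response values; the constant 3 and the use of the sample standard deviation as the noise estimate are illustrative choices.

```python
import numpy as np

def epsilon_prescription(y, n, tau=3.0):
    """Analytic epsilon (insensitive-zone width) as a function of sample size n.

    Assumed form: eps = tau * sigma * sqrt(ln(n) / n), where sigma estimates
    the additive-noise standard deviation. tau = 3 is an illustrative
    constant, not taken from the abstract.
    """
    sigma = np.std(y)  # crude noise estimate; a real pipeline would estimate
                       # the noise level more carefully, e.g. from residuals
    return tau * sigma * np.sqrt(np.log(n) / n)

def c_prescription(y):
    """Regularization parameter C from the training responses alone.

    Assumed form: C = max(|mean(y) + 3 std(y)|, |mean(y) - 3 std(y)|),
    i.e. C is matched to the spread of the response values.
    """
    m, s = np.mean(y), np.std(y)
    return max(abs(m + 3.0 * s), abs(m - 3.0 * s))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.normal(0.0, 1.0, 300)
    # Under this prescription, epsilon shrinks as the sample size grows,
    # reflecting the dependence on training sample size described above.
    print(epsilon_prescription(y, 30), epsilon_prescription(y, 300))
    print(c_prescription(y))
```

Note that both quantities are computed directly from the training data, with no cross-validation loop; this is the sense in which the selection is "analytic" rather than resampling-based.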