Cross-validation is commonly used to estimate the overall error rate of a designed classifier in small-sample expression studies. The true error of the classifier depends on the prior probabilities of the classes. Under random sampling these priors can be estimated consistently from the class proportions in the sample, but under separate sampling, where the class sample sizes are fixed before the data are drawn, the data carry no information about the priors, which must therefore be “estimated” from outside the experiment. We have conducted a set of simulations to study the bias of cross-validation as a function of these “estimates”. The results show that a poor choice of prior probabilities can substantially increase the bias of cross-validation as an estimator of the true error.
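The setup described above can be illustrated with a minimal simulation sketch. Everything in it is an assumption for illustration, not the paper's actual design: two univariate Gaussian classes, a plug-in midpoint-threshold classifier, leave-one-out cross-validation, and the particular sample sizes and priors. Under separate sampling the per-class sample sizes `n0` and `n1` are fixed in advance, the true error is a prior-weighted mixture of the exact per-class error rates, and the cross-validation estimate weights the per-class leave-one-out error counts by an externally supplied ("assumed") prior.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def class_errors(t, mu0=0.0, s0=1.0, mu1=1.0, s1=2.0):
    """Exact per-class error of the rule 'predict class 1 if x > t'."""
    e0 = 1.0 - normal_cdf((t - mu0) / s0)  # class-0 mass above the threshold
    e1 = normal_cdf((t - mu1) / s1)        # class-1 mass below the threshold
    return e0, e1

def midpoint_threshold(x0, x1):
    # plug-in classifier: threshold halfway between the class sample means
    return 0.5 * (x0.mean() + x1.mean())

def loo_class_error(x0, x1, which):
    """Leave-one-out error rate counted within one class."""
    xs = x0 if which == 0 else x1
    wrong = 0
    for i in range(len(xs)):
        if which == 0:
            m0 = (x0.sum() - x0[i]) / (len(x0) - 1)  # mean without held-out point
            t = 0.5 * (m0 + x1.mean())
            wrong += x0[i] > t
        else:
            m1 = (x1.sum() - x1[i]) / (len(x1) - 1)
            t = 0.5 * (x0.mean() + m1)
            wrong += x1[i] < t
    return wrong / len(xs)

def cv_bias(n0, n1, c_true, c_assumed, reps=1000):
    """Mean of (prior-weighted LOO estimate - true error) over repeated designs."""
    diffs = []
    for _ in range(reps):
        x0 = rng.normal(0.0, 1.0, n0)  # separate sampling: n0, n1 fixed in advance
        x1 = rng.normal(1.0, 2.0, n1)
        e0, e1 = class_errors(midpoint_threshold(x0, x1))
        true_err = c_true * e0 + (1.0 - c_true) * e1
        cv_est = (c_assumed * loo_class_error(x0, x1, 0)
                  + (1.0 - c_assumed) * loo_class_error(x0, x1, 1))
        diffs.append(cv_est - true_err)
    return float(np.mean(diffs))

# bias when the assumed prior matches the true prior vs. a poor choice
print("c_assumed = c_true = 0.5 :", cv_bias(10, 30, 0.5, 0.5))
print("c_assumed = 0.1          :", cv_bias(10, 30, 0.5, 0.1))
```

In this toy model the two per-class error rates differ systematically (the class variances are unequal), so the weighting matters: when the assumed prior is far from the true prior, the weighted cross-validation estimate acquires an extra bias term beyond the usual cross-validation variability, mirroring the effect the simulations in the paper quantify.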