Many bioinformatics datasets suffer from noise, making it difficult to build reliable models. These datasets can also exhibit class imbalance (many more examples of the negative class than the positive class), which further degrades classification performance. It is not known how these two problems intersect: no previous study has considered to what extent the noise level (total quantity of noise) and the noise distribution (amount of noise in each class) affect performance when varied at the same time. To explore this question, we injected artificial class noise into twelve clean bioinformatics datasets of varying levels of class imbalance (all of which were relatively easy to learn from), varying both the level and the distribution of the noise. We discovered that when the number of noisy instances is at most 40% of the total number of minority-class instances, the resulting noisy datasets (regardless of which classes suffered from noise injection) are nearly as easy to build models from as the original, clean data. At greater levels of noise injection, however, the distribution does matter, and in particular it matters in proportion to the imbalance of the original (clean) dataset. If the original dataset was mostly balanced, injecting noise into the minority class has little more effect than injecting it into the majority class, but for highly imbalanced datasets, injecting noise into the minority class yields much worse results than injecting it into the majority class.
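The injection procedure described above can be sketched as label flipping with two knobs: the noise level (number of flipped instances, expressed relative to the minority-class count) and the noise distribution (what share of the flips fall in each class). The function below is a minimal illustrative sketch of this idea for binary 0/1 labels; the function name, parameters, and exact sampling scheme are assumptions for illustration, not the study's published procedure.

```python
import random

def inject_class_noise(labels, noise_level, minority_share,
                       minority_label=1, seed=0):
    """Flip labels in a binary (0/1) dataset.

    noise_level:    number of noisy instances, as a fraction of the
                    minority-class count (e.g. 0.4 = 40% of minority size).
    minority_share: fraction of the flipped instances drawn from the
                    minority class (1.0 = all noise hits the minority class,
                    0.0 = all noise hits the majority class).
    """
    rng = random.Random(seed)
    minority_idx = [i for i, y in enumerate(labels) if y == minority_label]
    majority_idx = [i for i, y in enumerate(labels) if y != minority_label]

    # Total flips are sized relative to the minority class, as in the text.
    n_noisy = round(noise_level * len(minority_idx))
    n_min = min(round(minority_share * n_noisy), len(minority_idx))
    n_maj = min(n_noisy - n_min, len(majority_idx))

    # Sample distinct instances from each class and flip their labels.
    to_flip = rng.sample(minority_idx, n_min) + rng.sample(majority_idx, n_maj)
    noisy = list(labels)
    for i in to_flip:
        noisy[i] = 1 - noisy[i]
    return noisy
```

For example, with 10 minority and 90 majority instances, `noise_level=0.4` flips 4 labels in total, and `minority_share=0.5` splits those flips evenly between the two classes.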