With recent advances in sensing technology, including improved performance and reduced size and cost, prognostics and health management is gaining popularity as a means of ensuring the reliability and safety of engineered systems. While many methods have been developed for a wide range of systems, relatively little research has examined the potential benefits of concepts from reliability engineering, such as fault tolerance. This paper applies majority-vote fault tolerance to improve the accuracy of classifications made by machine learning algorithms. Support vector machine, artificial neural network, and naive Bayes classifiers are applied to a single data set, and the reliability of, and correlations between, the individual classifiers are analyzed. Our results suggest that fault tolerance can improve classification accuracy, but that correlation between the individual algorithms can reduce the overall effectiveness of the approach. Thus, a detailed assessment of candidate algorithms will be necessary before the full potential of fault-tolerant classification can be realized.
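The majority-vote scheme described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's actual experimental code: it assumes each of the three classifiers (e.g., SVM, neural network, naive Bayes) has already produced a list of per-sample predictions, and combines them by taking the most common label for each sample.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier predictions by majority vote.

    predictions: a list of prediction sequences, one per classifier,
    all of equal length (one prediction per sample).
    Returns one combined label per sample. With an odd number of
    classifiers and binary labels, no ties can occur; otherwise ties
    are broken in favor of the first label encountered.
    """
    combined = []
    for votes in zip(*predictions):
        # Counter.most_common(1) yields the label with the most votes.
        combined.append(Counter(votes).most_common(1)[0][0])
    return combined

# Hypothetical per-sample outputs from three classifiers on five samples:
svm_preds = [1, 0, 1, 1, 0]
ann_preds = [1, 1, 1, 0, 0]
nb_preds  = [0, 0, 1, 1, 1]

fused = majority_vote([svm_preds, ann_preds, nb_preds])
# fused == [1, 0, 1, 1, 0]
```

Note that the benefit of this fusion depends on the classifiers making independent errors: if all three tend to misclassify the same samples (high error correlation), the majority vote simply reproduces the shared mistake, which is the limitation the abstract highlights.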