This paper discusses the use of the Sparse Matrix Transform (SMT) to model the covariance structure of high-dimensional data in the likelihood ratio test used for hypothesis testing. The SMT has been shown to produce more accurate estimates of covariance matrices when the number of training samples n is much less than the number of dimensions p of the data. Several experiments with face recognition and hyperspectral images show that SMT-based hypothesis testing can be superior to other methods in at least two general respects: first, the SMT-based method is more robust to the size of the training set, remaining accurate even when only a few training samples are available; second, the total computation required to apply the method is very low, making it attractive for low-power devices and for applications requiring fast computation.
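As a rough illustration of the estimator, the Python sketch below shows one common greedy construction of the SMT: the eigenvector matrix is approximated by a product of K Givens rotations, each chosen to decorrelate the most correlated pair of coordinates in the current rotated sample covariance. The function name `smt_covariance`, the model-order parameter `K`, and the zero-mean assumption on the data are illustrative choices for this sketch, not necessarily the paper's exact formulation.

```python
import numpy as np

def smt_covariance(X, K):
    """Sketch of a greedy SMT covariance estimate.

    X : (n, p) array of training samples (rows), assumed zero-mean.
    K : number of Givens rotations (the model-order knob).
    Returns the estimate E @ diag(lam) @ E.T along with E and lam.
    """
    n, p = X.shape
    S = X.T @ X / n                      # sample covariance
    E = np.eye(p)                        # accumulated rotations = eigenvector estimate

    for _ in range(K):
        # Normalized squared correlation of every coordinate pair;
        # pick the pair that is currently most correlated.
        C2 = S**2 / np.outer(np.diag(S), np.diag(S))
        np.fill_diagonal(C2, -np.inf)
        i, j = np.unravel_index(np.argmax(C2), C2.shape)

        # Givens angle that zeroes S[i, j] after rotation
        # (the standard Jacobi condition tan(2*theta) = 2*S_ij / (S_ii - S_jj)).
        theta = 0.5 * np.arctan2(2 * S[i, j], S[i, i] - S[j, j])
        c, s = np.cos(theta), np.sin(theta)
        G = np.eye(p)
        G[i, i], G[j, j] = c, c
        G[i, j], G[j, i] = -s, s

        # Decorrelate the chosen pair and accumulate the transform.
        # (A production implementation would update only rows/columns
        # i and j instead of forming full p-by-p products.)
        S = G.T @ S @ G
        E = E @ G

    lam = np.maximum(np.diag(S), 1e-12)  # eigenvalue estimates, kept positive
    return E @ np.diag(lam) @ E.T, E, lam
```

The resulting estimate can then be plugged into a Gaussian log-likelihood ratio statistic in place of the sample covariance. Because E is a product of only K sparse rotations, applying E or its transpose to a test vector costs on the order of K operations rather than p^2, which is consistent with the low computational cost claimed above.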