We explore dictionary learning and sparse coding applied to audio spectrograms. First, we generate a dictionary of feature vectors by sampling many columns of the input spectrograms. Then, using ℓ1-regularized least-squares optimization, we transform each column of a spectrogram into a sparse coefficient vector. The learned dictionary columns thus act as an overcomplete basis for the spectrogram columns. This dictionary-generation step is entirely unsupervised. Next, we use the coefficient vectors to train a support vector machine (SVM) that classifies the acoustic data. Using this method, we classified one-minute audio samples of chicken vocalizations, recorded in a controlled environment, into two groups: healthy and infected with infectious bronchitis (IB), obtaining a classification accuracy of 97.85%.
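The ℓ1-regularized least-squares encoding described above can be sketched with a simple iterative soft-thresholding (ISTA) solver. This is an illustrative stand-in, not the authors' implementation; the toy dictionary `D`, signal `x`, and regularization weight `lam` below are assumptions made for the example.

```python
# Sketch: lasso-style sparse coding of one spectrogram column against an
# overcomplete dictionary via ISTA (iterative soft-thresholding).
# Illustrative only; D, x, and lam are made up for this example.

def soft_threshold(v, t):
    """Shrink v toward zero by t (the proximal map of the l1 norm)."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def sparse_code(D, x, lam, step=0.3, iters=1000):
    """Minimize 0.5*||x - D a||^2 + lam*||a||_1 over coefficients a.

    D is a list of rows (n_features x n_atoms); x has length n_features;
    step must stay below 1/L, where L is the top eigenvalue of D^T D.
    """
    n, m = len(D), len(D[0])
    a = [0.0] * m
    for _ in range(iters):
        # Residual r = D a - x.
        r = [sum(D[i][j] * a[j] for j in range(m)) - x[i] for i in range(n)]
        # Gradient of the least-squares term: g = D^T r.
        g = [sum(D[i][j] * r[i] for i in range(n)) for j in range(m)]
        # Gradient step on the smooth term, then soft-threshold for the l1 term.
        a = [soft_threshold(a[j] - step * g[j], step * lam) for j in range(m)]
    return a

# Toy overcomplete dictionary: three unit-norm atoms in R^2.
D = [[1.0, 0.0, 0.6],
     [0.0, 1.0, 0.8]]
x = [1.0, 0.0]  # signal equal to the first atom
a = sparse_code(D, x, lam=0.05)
# The encoding concentrates on the first atom; the other coefficients vanish.
```

In the pipeline described above, coefficient vectors like `a` — one per spectrogram column — would form the feature representation passed to the SVM classifier.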