The use of single-case research methods for validating academic and behavioral interventions has gained considerable attention in recent years. Consequently, methods for evaluating whether, and to what extent, primary research reports provide evidence of intervention effectiveness have proliferated. Despite the recent interest in harnessing single-case research to identify empirically supported strategies, examination of these tools has revealed a lack of consistency in both the methodological criteria sampled and the scoring procedures used to evaluate primary research reports. The present study examined the extent to which various evidence rubrics addressed specific methodological features of single-case research and classified studies into similar evidence categories. Results indicated that the methodological criteria included within rubrics varied considerably, particularly those related to determining the generality of the intervention under study. Moreover, substantial discordance was observed in the evidence classifications assigned to reviewed studies. These findings are discussed in the context of the still-developing nature of single-case evidence reviews. Recommendations for both research and practice are provided.