The importance of providing explanations for predictions made by black‐box models has led to the development of explainer model methods such as LIME (local interpretable model‐agnostic explanations). LIME uses a surrogate model to explain the relationship between predictor variables and predictions from a black‐box model in a local region around a prediction of interest. However, the quality of the resulting explanations depends on how well the explainer model captures the black‐box model in the specified local region. Here we introduce three visual diagnostics for assessing the quality of LIME explanations: (1) explanation scatterplots, (2) assessment metric plots, and (3) feature heatmaps. We apply the visual diagnostics to a forensic bullet matching dataset to show examples where LIME explanations depend on the tuning parameter values and the explainer model oversimplifies the black‐box model. Our examples raise concerns about claims made about LIME, echoing other criticisms in the literature.
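
The local surrogate idea described above can be illustrated with a minimal sketch (not the paper's code, and independent of the `lime` package): perturb the instance of interest, weight the perturbations by proximity using an exponential kernel, and fit a weighted linear model whose coefficients serve as the explanation. The `kernel_width` argument is the kind of tuning parameter on which, as noted above, the resulting explanations can depend.

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(black_box, x0, n_samples=5000, kernel_width=0.75,
                    scale=0.1, seed=0):
    """Fit a LIME-style weighted linear surrogate around x0 (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance of interest with Gaussian noise.
    X = x0 + rng.normal(scale=scale, size=(n_samples, x0.size))
    y = black_box(X)
    # 2. Weight perturbations by proximity to x0 (exponential kernel).
    d = np.linalg.norm(X - x0, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 3. Fit a weighted linear surrogate; its coefficients are the "explanation",
    #    and the weighted R^2 indicates how well it captures the black box locally.
    surrogate = Ridge(alpha=1.0).fit(X, y, sample_weight=w)
    return surrogate.coef_, surrogate.score(X, y, sample_weight=w)

# Example: a smooth nonlinear "black box"; near x0 the surrogate's
# coefficients approximate the local gradient (cos(0) = 1 and 2 * 1 = 2).
f = lambda X: np.sin(X[:, 0]) + X[:, 1] ** 2
x0 = np.array([0.0, 1.0])
coefs, r2 = local_surrogate(f, x0)
```

The weighted R² returned here is one simple fidelity measure; a low value is exactly the kind of oversimplification the diagnostics above are designed to reveal.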