Traditional item analyses such as classical test theory (CTT) use exam‐taker responses to assessment items to approximate item difficulty and discrimination. The growing adoption of electronic assessment platforms (EAPs) by educational institutions opens new avenues for assessment analytics, as these platforms capture detailed logs of an exam‐taker's journey through an exam. This paper explores how the logs created by EAPs can be employed alongside exam‐taker responses and CTT to gain deeper insights into exam items. In particular, we propose an approach that derives features from exam logs to approximate item difficulty and discrimination from exam‐taker behaviour during an exam. Items whose difficulty and discrimination differ significantly between the CTT analysis and our approach are flagged through outlier detection for independent academic review. We demonstrate our approach by analysing de‐identified exam logs and responses to assessment items from 463 medical students enrolled in a first‐year biomedical sciences course. The analysis shows that the number of times an exam‐taker visits an item before selecting a final response is a strong indicator of the item's difficulty and discrimination. The course instructor's scrutiny of the seven items identified as outliers suggests that our log‐based analysis can provide insights beyond those captured by traditional item analyses.
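As a rough illustration of the comparison described above, the sketch below computes CTT item difficulty (proportion correct) and discrimination (point‐biserial correlation with the rest‐score) from a scored response matrix, derives a log‐based difficulty proxy from per‐item visit counts, and flags items where the two measures disagree using a simple z‐score rule. This is a minimal sketch under stated assumptions, not the paper's implementation: the function names, the input matrices, the choice of rest‐score correlation, and the z‐score threshold are all illustrative.

```python
import numpy as np

def ctt_item_stats(responses):
    """CTT statistics from a (students x items) 0/1 response matrix.

    Difficulty: proportion of exam-takers answering the item correctly.
    Discrimination: point-biserial correlation between the item score and
    the rest-score (total score excluding the item itself).
    """
    responses = np.asarray(responses, dtype=float)
    difficulty = responses.mean(axis=0)
    totals = responses.sum(axis=1)
    discrimination = np.empty(responses.shape[1])
    for j in range(responses.shape[1]):
        rest = totals - responses[:, j]  # exclude item j from the total score
        discrimination[j] = np.corrcoef(responses[:, j], rest)[0, 1]
    return difficulty, discrimination

def log_based_difficulty(visits):
    """Log-derived difficulty proxy: mean number of visits to an item
    before the final response (students x items matrix)."""
    return np.asarray(visits, dtype=float).mean(axis=0)

def flag_outlier_items(ctt_difficulty, log_difficulty, z_threshold=2.0):
    """Flag items whose CTT-based and log-based difficulty disagree.

    Both measures are standardised; items whose absolute standardised
    difference exceeds the threshold are returned for manual review.
    """
    z = lambda x: (x - x.mean()) / x.std()
    # Many visits suggest a hard item, so compare against (1 - difficulty).
    gap = z(1.0 - ctt_difficulty) - z(log_difficulty)
    return np.where(np.abs(gap) > z_threshold)[0]

# Toy example: 6 exam-takers, 4 items (hypothetical data).
responses = np.array([
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 0, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
    [1, 1, 1, 1],
])
visits = np.array([
    [1, 3, 2, 1],
    [2, 4, 1, 1],
    [1, 5, 3, 2],
    [1, 2, 2, 1],
    [3, 4, 2, 1],
    [1, 3, 1, 1],
])
difficulty, discrimination = ctt_item_stats(responses)
outliers = flag_outlier_items(difficulty, log_based_difficulty(visits))
print(difficulty, discrimination, outliers)
```

In this toy setup the flagged indices would be passed to an instructor for independent review, mirroring the review step described above; a rank‐based or robust comparison could equally be substituted for the z‐score rule.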