Infant eye movements are an important behavioral resource for understanding early human development and learning. However, the complexity and volume of gaze data recorded by state-of-the-art eye-tracking systems also pose a challenge: how does one make sense of such dense data? To address this challenge, this article describes an interactive approach that integrates top-down domain knowledge with bottom-up information visualization and visual data mining. The key idea behind this method is to leverage the computational power of the human visual system. Accordingly, we propose an approach in which scientists iteratively examine and identify underlying patterns through data visualization and link those discovered patterns with top-down knowledge and hypotheses. Combining bottom-up data visualization with top-down theoretical knowledge through visual data mining is an effective and efficient way to make discoveries from gaze data. We first provide an overview of the underlying principles of this new human-in-the-loop approach to knowledge discovery and then present several examples illustrating how interactive exploratory analysis can lead to new findings.