Comparing time series is a problem of critical importance in a broad range of applications, from data mining (searching for temporal “patterns” in historical data), to speech recognition (classifying phonemes from acoustic recordings), to surveillance (detecting unusual events from video and other sensory input), to computer animation (concatenating and interpolating motion capture sequences), to mention just a few. The problem is difficult because the same event can manifest itself in a variety of ways, with the data subject to a large degree of variability due to nuisance factors in the data formation process. For instance, the appearance of a person walking in a video sequence can vary depending on the individual, his gait, location, orientation, speed, clothing, illumination, etc. And yet, if I see Giorgio Picci, I can recognize him from one hundred yards away by the way he walks, regardless of what he is wearing, or whether it is a sunny or a cloudy day. One could conjecture that there must exist some statistics of my retinal signals that are invariant, or at least insensitive, to such nuisance factors and are instead Giorgio-Specific (GS). This information ought to be encoded in the temporal evolution of the retinal signals, for one can strip the images of their pictorial content by attaching light bulbs to a person’s joints and turning off the lights (or by using a state-of-the-art motion capture system, for instance one made by E-motion/BTS); one can still tell a great deal from just the moving dots [6].