In ongoing research, we address the problem of representing and processing motion information from an integrated perspective, covering the range from early visual processing to higher-level cognitive aspects. Here we present experiments that were conducted to investigate the representation and processing of spatio-temporal information. Whereas research in this field is typically concerned with the formulation and implementation of visual algorithms, such as navigation by analysis of the retinal flow pattern caused by locomotion, we are interested in memory-based capabilities, such as the recognition of complicated gestures [16].
The results of this array of experiments will yield a set of parameters for training an artificial neural network model. Alternatively, these parameters can determine the value ranges of symbolic descriptions, such as those in the qualitative approach of [11], in order to provide a user interface matched to the conditions of human vision. The architecture of the neural net will be briefly sketched. Its output will serve as input to a higher-level stage modelled with qualitative means.