We propose a novel geometric framework for analyzing spontaneous facial expressions, with the specific goal of comparing, matching, and averaging the shapes of landmark trajectories. Here we represent facial expressions by the motion of landmarks across time. The trajectories are represented as curves. We use elastic shape analysis of these curves to develop a Riemannian framework for analyzing...
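In elastic shape analysis, trajectories are typically compared through the square-root velocity function (SRVF), under which the elastic Riemannian metric reduces to an ordinary L2 metric. A minimal NumPy sketch of that representation follows; the full framework would additionally optimize over rotations and reparameterizations, which is omitted here, and the function names are illustrative rather than the paper's.

    import numpy as np

    def srvf(curve, eps=1e-8):
        # Square-root velocity function of a sampled curve.
        # curve: (T, d) array holding T samples of a d-dimensional
        # landmark trajectory; returns q(t) = f'(t) / sqrt(||f'(t)||).
        vel = np.diff(curve, axis=0) * (len(curve) - 1)   # finite-difference derivative
        speed = np.linalg.norm(vel, axis=1, keepdims=True)
        return vel / np.sqrt(speed + eps)

    def elastic_distance(curve_a, curve_b):
        # L2 distance between SRVFs: the elastic metric before the
        # rotation/reparameterization alignment step. Assumes both
        # curves are sampled at the same number of points.
        qa, qb = srvf(curve_a), srvf(curve_b)
        return np.sqrt(np.sum((qa - qb) ** 2) / len(qa))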
This paper investigates the effects of sampling on action recognition performance. Currently, dense (regular-grid) sampling and uniform random sampling are popular strategies that achieve state-of-the-art performance. However, they are data-blind and pay equal attention to locations of differing informativeness. In this paper, a Shannon-information-based adaptive sampling approach is proposed for...
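The truncated abstract does not give the exact informativeness measure, but the idea can be sketched with patch-level Shannon entropy: draw sample locations with probability proportional to the entropy of their local intensity histogram, so flat, uninformative regions receive fewer samples. A rough NumPy illustration under that assumption (the helpers are hypothetical, not the paper's code):

    import numpy as np

    def patch_entropy(gray, size=16, bins=32):
        # Shannon entropy of the intensity histogram of each
        # non-overlapping size x size patch of a grayscale frame.
        h, w = gray.shape
        ent = np.zeros((h // size, w // size))
        for i in range(ent.shape[0]):
            for j in range(ent.shape[1]):
                patch = gray[i*size:(i+1)*size, j*size:(j+1)*size]
                counts, _ = np.histogram(patch, bins=bins, range=(0, 256))
                p = counts[counts > 0] / counts.sum()
                ent[i, j] = -np.sum(p * np.log2(p))
        return ent

    def adaptive_sample(gray, n_samples, size=16, rng=None):
        # Draw patch indices with probability proportional to entropy,
        # concentrating samples on informative regions.
        if rng is None:
            rng = np.random.default_rng(0)
        grid = (gray.shape[0] // size, gray.shape[1] // size)
        ent = patch_entropy(gray, size).ravel() + 1e-12
        idx = rng.choice(ent.size, size=n_samples, p=ent / ent.sum())
        rows, cols = np.unravel_index(idx, grid)
        return np.stack([rows, cols], axis=1) * size   # top-left corners of sampled patches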
Action recognition based on human skeleton structure is nowadays a flourishing research field, mainly owing to recent advances in capture technologies and skeleton extraction algorithms. In this context, we observed that 3D skeleton-based actions share several properties with handwritten symbols, since both result from a human performance. We accordingly hypothesize that...
We model dyadic (two-person) interactions by discriminatively training a spatio-temporal deformable part model of fine-grained human interactions. All interactions involve at most two persons. Our models are capable of localizing human interactions in unsegmented videos, marking the interactions of interest in space and time. Our contributions are as follows: First, we create a model that localizes...
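A deformable part model scores each candidate placement as a root filter response plus, for each part, the best trade-off between that part's filter response and a quadratic penalty for drifting from its anchor offset. A brute-force 2D toy version of that scoring is sketched below; the paper's model also extends over time, and the deformation weight def_w is an assumed parameter.

    import numpy as np

    def dpm_score(root_resp, part_resps, anchors, def_w=0.1):
        # root_resp: (H, W) root filter responses over a frame.
        # part_resps: list of (H, W) part filter responses.
        # anchors: list of (dy, dx) ideal part offsets from the root.
        # For every root location, each part contributes
        #   max over placements of (response - quadratic deformation cost),
        # computed here by brute force for clarity (real DPMs use
        # generalized distance transforms to do this in linear time).
        H, W = root_resp.shape
        ys, xs = np.mgrid[0:H, 0:W]
        total = root_resp.astype(float)
        for resp, (ay, ax) in zip(part_resps, anchors):
            best = np.full((H, W), -np.inf)
            for py in range(H):
                for px in range(W):
                    cost = def_w * ((ys + ay - py) ** 2 + (xs + ax - px) ** 2)
                    best = np.maximum(best, resp[py, px] - cost)
            total += best
        return total   # (H, W) map of interaction scores per root placement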
Overlapped handwriting recognition is widely used to input text on smart devices, since it allows users to write characters continuously on a size-restricted screen. Segmenting the stroke sequence into characters is a crucial step before recognition. It is currently formulated as a two-class classification problem that evaluates merely the relationship between a pair of adjacent strokes. To facilitate...
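As a concrete illustration of "the relationship between a pair of adjacent strokes", a split/merge classifier might be fed simple pairwise geometry such as the pen-up jump and bounding-box overlap. A hedged sketch; the feature set here is illustrative, not the paper's:

    import numpy as np

    def stroke_pair_features(s1, s2):
        # s1, s2: (N, 2) arrays of pen points for two adjacent strokes.
        # Returns a small geometric feature vector for a binary
        # split/merge decision at the stroke boundary.
        gap = np.linalg.norm(s2[0] - s1[-1])          # pen-up jump distance
        min1, max1 = s1.min(axis=0), s1.max(axis=0)   # bounding boxes
        min2, max2 = s2.min(axis=0), s2.max(axis=0)
        x_overlap = max(0.0, min(max1[0], max2[0]) - max(min1[0], min2[0]))
        dx_center = (min2[0] + max2[0]) / 2 - (min1[0] + max1[0]) / 2
        return np.array([gap, x_overlap, dx_center])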
In recent years, the most popular video-based human action recognition methods have relied on extracting feature representations using Convolutional Neural Networks (CNNs) and then using these representations to classify actions. In this work, we propose a fast and accurate video representation derived from the motion-salient region (MSR), which captures the features most useful for action labeling....
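The truncated abstract does not say how the MSR is obtained; a common stand-in is to threshold dense optical-flow magnitude and keep only the strongly moving pixels. A minimal sketch under that assumption:

    import numpy as np

    def motion_salient_mask(flow, keep=0.2):
        # flow: (H, W, 2) dense optical flow between consecutive frames.
        # Keeps the top `keep` fraction of pixels by flow magnitude as a
        # crude proxy for the motion-salient region.
        mag = np.linalg.norm(flow, axis=2)
        return mag >= np.quantile(mag, 1.0 - keep)

    def msr_bbox(mask):
        # Tight bounding box (y0, y1, x0, x1) around the salient pixels,
        # usable as a crop for downstream CNN feature extraction.
        ys, xs = np.nonzero(mask)
        return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1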
This paper proposes an end-to-end framework, namely the fully convolutional recurrent network (FCRN), for handwritten Chinese text recognition (HCTR). Unlike traditional methods that rely heavily on segmentation, our FCRN is trained directly on online text data and learns to associate the pen-tip trajectory with a sequence of characters. FCRN consists of four parts: a path-signature layer to extract...
The path signature feature (PSF), which was initially introduced in rough-path theory as a branch of stochastic analysis, has recently been applied successfully in pattern recognition for extracting a sufficient quantity of the information contained in a finite trajectory, though at potentially high dimension. In this paper, we propose a variation of the path signature representation, namely the...
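For a piecewise-linear trajectory, the first two signature levels have a closed form: level 1 is the net displacement, and level 2 collects the iterated integrals of (x_i - x_i(0)) against dx_j, whose antisymmetric part is the signed (Levy) area. A NumPy sketch of these two levels (higher levels are usually computed with a library such as iisignature):

    import numpy as np

    def signature_level2(path):
        # path: (T, d) array of trajectory samples.
        # level1[i]    = x_i(T) - x_i(0)                  (net displacement)
        # level2[i, j] = integral of (x_i - x_i(0)) dx_j, accumulated
        # segment by segment via the closed form for linear pieces.
        inc = np.diff(path, axis=0)          # segment increments, (T-1, d)
        start = path[:-1] - path[0]          # x(t_k) - x(0) at each segment start
        level1 = inc.sum(axis=0)
        level2 = start.T @ inc + 0.5 * inc.T @ inc
        return level1, level2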