A discriminative ensemble tracker employs multiple classifiers, each of which casts a vote on every obtained sample. The votes are then aggregated to localize the target object. Such a method relies on the collective competence and diversity of the ensemble to approach the target/non-target classification task from different views. However, by updating all of the ensemble using a...
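The vote-aggregation step described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's actual method: the function and parameter names (`ensemble_localize`, `weights`) are hypothetical, and a simple weighted sum stands in for whatever aggregation rule the tracker actually uses.

```python
import numpy as np

def ensemble_localize(samples, classifiers, weights):
    """Aggregate per-classifier votes over candidate samples.

    Each classifier scores every sample; scores are combined as a
    weighted sum, and the highest-scoring sample is taken as the
    target estimate. All names here are illustrative.
    """
    # votes has shape (n_classifiers, n_samples)
    votes = np.array([clf(samples) for clf in classifiers])
    scores = np.asarray(weights) @ votes  # weighted aggregation
    return int(np.argmax(scores)), scores

# Toy usage: scalar "samples" and two weak scorers that both
# prefer values near 1.0.
samples = np.array([0.0, 1.0, 2.0])
classifiers = [
    lambda s: -(s - 1.0) ** 2,     # quadratic penalty from 1.0
    lambda s: -np.abs(s - 1.0),    # absolute penalty from 1.0
]
best, scores = ensemble_localize(samples, classifiers, weights=[0.5, 0.5])
```

In a real tracker the samples would be image patches, each classifier would be trained on a different feature view, and the weights would reflect each member's recent accuracy.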
This paper presents a novel discriminative, generative, and collaborative appearance model for robust object tracking. In contrast to existing methods, we use different appearance manifolds to represent the target in the discriminative and generative appearance models and propose a novel collaborative scheme to combine these two components. In particular: 1) for the discriminative component, we develop...
This work seeks to apply the emerging virtual and mixed reality techniques to visual exploration and visualization of earth science data. A novel system is developed to facilitate a collaborative mixed reality visualization, enabling both in-situ and off-site users to simultaneously interact with and visualize science data within mixed reality realm. We implement the prototype system in the context...
Recently, convolutional neural network (CNN) models have achieved great success in many vision tasks. However, few attempts have been made to explore CNN for online model-free object tracking without time-consuming offline training. In this paper, we propose an online convolutional network (OC-N) for visual object tracking. To make the network less dependent on labeled data, K-means is employed to...
High-quality face image acquisition from the huge volume of video data obtained in visual sensor networks is of great significance in applications related to face processing, such as face recognition and reconstruction. This paper proposes an optimal face image acquisition method for visual sensor networks, which is based on collaborative face frame acquisition and heterogeneous feature fusion-based face quality...
Previous work has developed a visual tracking algorithm, based on sparsity, that represents a target as a superposition of templates from a gallery such that the coefficients are sparsely populated. When occlusions occur, sparsity is maintained by bringing additional trivial templates (identity bases) into that gallery. While desirable results have been reported in visual tracking applications, several...
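The representation this abstract refers to can be sketched as an L1-regularized least-squares problem: the candidate patch `y` is expressed over a dictionary that stacks the target templates with identity columns (the "trivial templates" that absorb occluded pixels). The sketch below solves it with plain ISTA; the function name, the regularization weight `lam`, and the iteration count are illustrative assumptions, not the cited algorithm's actual solver.

```python
import numpy as np

def sparse_represent(y, templates, lam=0.001, n_iter=200):
    """Solve min_c 0.5*||A c - y||^2 + lam*||c||_1 via ISTA,
    where A = [templates, I] appends identity 'trivial' templates
    so occluded pixels can be explained by sparse error columns.
    """
    d = y.shape[0]
    A = np.hstack([templates, np.eye(d)])  # gallery + trivial templates
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz const. of the gradient
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ c - y)
        z = c - grad / L
        # soft-thresholding keeps the coefficient vector sparse
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return c

# Toy usage: y exactly matches the first (unit-norm) template,
# so the sparse code should concentrate on coefficient 0.
templates = np.array([[1., 0.],
                      [1., 0.],
                      [0., 1.],
                      [0., 1.]]) / np.sqrt(2.0)
y = templates[:, 0]
c = sparse_represent(y, templates)
```

The tracker would score each candidate by its reconstruction error over the non-trivial coefficients; a candidate that needs large trivial-template coefficients is likely occluded.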
In this paper, we propose a novel collaborative appearance model for robust human tracking by exploiting both object and motion information in the Bayesian framework. In contrast to most existing methods, which use low- or high-level visual cues, we use mid-level visual cues via superpixels, which carry sufficient structural information to represent the object. In our work, the collaborative appearance is modeled...
We describe a model of “trust” in human-robot systems that is inferred from their interactions, and inspired by similar concepts relating to trust among humans. This computable quantity allows a robot to estimate the extent to which its performance is consistent with a human's expectations, with respect to task demands. Our trust model drives an adaptive mechanism that dynamically adjusts the robot's...