Numerous computer vision problems such as stereo depth estimation, object-class segmentation and foreground/background segmentation can be formulated as per-pixel image labeling tasks. Given one or many images as input, the desired output of these methods is usually a spatially smooth assignment of labels. The large number of such computer vision problems has led to significant research efforts,...
We consider object recognition in the context of lifelong learning, where a robotic agent learns to discriminate between a growing number of object classes as it accumulates experience about the environment. We propose an incremental variant of the Regularized Least Squares for Classification (RLSC) algorithm, and exploit its structure to seamlessly add new classes to the learned model. The presented...
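The incremental idea above can be illustrated with a minimal sketch of a linear one-vs-all regularized least squares classifier, where adding a new class only appends a target column. This is an assumed simplification for illustration (the class name, encoding, and update rule are ours, not necessarily the paper's exact formulation):

```python
import numpy as np

class IncrementalRLSC:
    """Sketch of incremental Regularized Least Squares for Classification.
    Maintains A = X^T X + lam*I and B = X^T Y; new classes are added on
    the fly by appending a column of targets to B."""

    def __init__(self, dim, lam=1e-3):
        self.A = lam * np.eye(dim)   # regularized Gram matrix, rank-1 updated
        self.B = np.zeros((dim, 0))  # one target column per known class
        self.classes = []

    def partial_fit(self, x, label):
        if label not in self.classes:        # seamlessly add a new class:
            self.classes.append(label)       # just append a zero column
            self.B = np.hstack([self.B, np.zeros((self.A.shape[0], 1))])
        y = -np.ones(len(self.classes))      # one-vs-all +/-1 encoding
        y[self.classes.index(label)] = 1.0
        self.A += np.outer(x, x)             # rank-1 update with new sample
        self.B += np.outer(x, y)

    def predict(self, x):
        W = np.linalg.solve(self.A, self.B)  # ridge solution, one column per class
        return self.classes[int(np.argmax(x @ W))]
```

Because the sufficient statistics `A` and `B` are updated per sample, no stored training data is needed when a new object class appears.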
Visual perception is a fundamental component for most robotics systems operating in human environments. Specifically, visual recognition is a prerequisite to a large variety of tasks such as tracking, manipulation, and human–robot interaction. As a consequence, the lack of successful recognition often becomes a bottleneck for the application of robotic systems to real-world situations. In this paper we...
In this paper we present an efficient active learning strategy applied to the problem of tactile exploration of an object's surface. The method uses Gaussian Process (GP) classification to efficiently sample the surface of the object in order to reconstruct its shape. The proposed method iteratively samples the surface of the object while simultaneously constructing a probabilistic model of the...
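The iterative sampling step can be sketched with a common uncertainty-sampling rule: probe next where the GP posterior variance is highest. This is a minimal 1-D illustration with an RBF kernel, not the paper's exact acquisition function (the function names and length-scale are assumptions):

```python
import numpy as np

def rbf(a, b, ell=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def next_probe(X, candidates, noise=1e-3):
    """Return the candidate location with the highest GP posterior
    variance given already-probed locations X (uncertainty sampling)."""
    K = rbf(X, X) + noise * np.eye(len(X))       # kernel on probed points
    Ks = rbf(candidates, X)                      # cross-covariances
    # posterior variance: k(x,x) - k_s K^{-1} k_s^T, with k(x,x) = 1
    var = 1.0 - np.einsum('ij,ij->i', Ks @ np.linalg.inv(K), Ks)
    return candidates[np.argmax(var)]
```

For example, with probes already taken at the two ends of a segment, the rule selects the midpoint, where the model is least certain.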
In this paper we tackle the problem of object recognition using haptic feedback from a robot holding and manipulating different objects. One of the main challenges in this setting is to understand the role of different sensory modalities (namely proprioception, object weight from F/T sensors and touch) and how to combine them to correctly discriminate different objects. We investigated these aspects...
The development of reliable and robust visual recognition systems is a major challenge for the deployment of autonomous robotic agents in unconstrained environments. Learning to recognize objects requires image representations that are discriminative to relevant information while being invariant to nuisances, such as scaling, rotations, light and background changes, and so forth. Deep Convolutional...
Multi-task learning is a natural approach for computer vision applications that require the simultaneous solution of several distinct but related problems, e.g. object detection, classification, tracking of multiple agents, or denoising, to name a few. The key idea is that exploring task relatedness (structure) can lead to improved performances. In this paper, we propose and study a novel sparse,...
In this paper we tackle the problem of estimating the local compliance of tactile arrays exploiting global measurements from a single force and torque sensor. The proposed procedure exploits a transformation matrix (describing the relative position between the local tactile elements and the global force/torque measurements) to define a linear regression problem on the unknown local stiffness. Experiments...
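The linear-regression formulation above can be sketched as follows, under an assumed simplified model (names and structure are illustrative, not the paper's exact setup): each trial presses the array with known local deflections, taxel `i` contributes `g[i] * k[i] * d[t, i]` to the measured global force, where `g[i]` is the known geometric mapping of taxel `i` into the force/torque sensor frame, and the unknown stiffnesses `k` are recovered by least squares:

```python
import numpy as np

def estimate_stiffness(d, g, f):
    """Recover per-taxel stiffness k from global force readings.

    d : (trials x taxels) known local deflections per trial
    g : (taxels,) known geometric mapping to the global sensor frame
    f : (trials,) measured global force per trial
    """
    A = d * g[None, :]                         # design matrix, linear in k
    k, *_ = np.linalg.lstsq(A, f, rcond=None)  # least-squares stiffness
    return k
```

Stacking several trials makes the system overdetermined, so measurement noise is averaged out by the least-squares solution.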
In this paper we propose a weighted supervised pooling method for visual recognition systems. We combine a standard Spatial Pyramid Representation which is commonly adopted to encode spatial information, with an appropriate Feature Space Representation favoring semantic information in an appropriate feature space. For the latter, we propose a weighted pooling strategy exploiting data supervision to...
Recent developments in learning sophisticated, hierarchical image representations have led to remarkable progress in the context of visual recognition. While these methods are becoming standard in modern computer vision systems, they are rarely adopted in robotics. The question arises of whether solutions, which have been primarily developed for image retrieval, can perform well in more dynamic and...
In this paper we present and start analyzing the iCub World data-set, an object recognition data-set that we acquired using a Human-Robot Interaction (HRI) scheme and the iCub humanoid robot platform. Our setup allows for rapid acquisition and annotation of data with corresponding ground truth. While more constrained in its scope -- the iCub world is essentially a robotics research lab -- we demonstrate...
The paper aims at building a computer vision system for automatic image labeling in robotics scenarios. We show that the weak supervision provided by a human demonstrator, through the exploitation of independent motion, enables realistic Human-Robot Interaction (HRI) and achieves automatic image labeling. We start by reviewing the underlying principles of our previous method for egomotion...
We present an original method for independent motion detection in dynamic scenes. The algorithm is designed for real-time robotics applications and it overcomes the shortcomings of current approaches to egomotion estimation in the presence of many outliers, occlusions and cluttered background. The method relies on a stereo system which performs the reprojection of a sparse set of features following...
We propose an algorithm for the visual detection and localisation of the hand of a humanoid robot. This algorithm imposes low requirements on the type of supervision required to achieve good performance. In particular the system performs feature selection and adaptation using images that are only labelled as containing the hand or not, without any explicit segmentation. Our algorithm is an online...
Visual motion is a simple yet powerful cue widely used by biological systems to improve their perception and adaptation to the environment. Examples of tasks that greatly benefit from the ability to detect movement are object segmentation, 3D scene reconstruction and control of attention. In computer vision several algorithms for computing visual motion and optic flow exist. However their application...