In order to understand the underwater environment, it is essential to use sensing methodologies capable of perceiving three-dimensional (3D) information about the explored site. Sonar sensors are commonly employed in underwater exploration. This paper presents a novel methodology for retrieving the 3D information of underwater objects. The proposed solution employs an acoustic camera, which represents the...
Localizing a mobile robot in a given map is a crucial task for autonomy. We present an approach to localize a robot equipped with a camera in a known 2D or 3D geometrical map that is augmented with semantic information (e.g., a floor plan with semantic labels). The approach uses semantic information to mediate between the visual information from the camera and the geometrical information in the map...
We present an efficient method for geolocalization in urban environments starting from a coarse estimate of the location provided by a GPS and using a simple untextured 2.5D model of the surrounding buildings. Our key contribution is a novel efficient and robust method to optimize the pose: We train a Deep Network to predict the best direction to improve a pose estimate, given a semantic segmentation...
Scene labeling enables very sophisticated and powerful applications for autonomous driving. Training classifiers for this task would not be possible without large datasets of pixelwise labeled images, and manually annotating a large number of images is an expensive and time-consuming process. In this paper, we propose a new semi-automatic annotation tool for scene labeling tailored for...
A reconstruction method is proposed to improve the measurement accuracy of binocular vision. The image projections of a 3-D point on the 3-D reference are analyzed using multiple-view geometry. The Plücker coordinates of the two skew projection lines are constructed from the image points. The line segment perpendicular to the two projection lines is generated from the direction vectors and the end points...
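The perpendicular segment between two skew projection lines described in this abstract is a standard construction; the following is a minimal NumPy sketch under that reading, with the lines given as point/direction pairs (the function names and the midpoint-as-reconstruction convention are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def plucker(p, d):
    # Plücker coordinates (direction, moment) of the line through p with direction d
    d = d / np.linalg.norm(d)
    return d, np.cross(p, d)

def common_perpendicular(p1, d1, p2, d2):
    # Endpoints of the segment perpendicular to both lines (their closest points).
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    r = p2 - p1
    b = d1 @ d2
    e, f = d1 @ r, d2 @ r
    denom = 1.0 - b * b          # zero when the lines are parallel
    t1 = (e - b * f) / denom
    t2 = (b * e - f) / denom
    q1 = p1 + t1 * d1            # foot of the perpendicular on line 1
    q2 = p2 + t2 * d2            # foot of the perpendicular on line 2
    # Midpoint of the segment is a common choice for the reconstructed 3-D point
    return q1, q2, 0.5 * (q1 + q2)
```

In a binocular setup, the two lines are the back-projected rays through the left and right image points; when calibration and detection are noisy the rays do not intersect, and the midpoint of their common perpendicular serves as the triangulated point.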
It is well known that sport skill learning is facilitated by video observation of players' actions in the target sport. A viewpoint change function is desirable when a learner observes the actions using video images. However, in general, viewpoint changes for observation are not possible because most videos are filmed from a fixed point using a single video camera. The objective of this research is...
A workflow is proposed for Cultural Heritage applications in which the fusion of 3D and 2D visual data is required. Using data acquired by cheap, standard devices, such as a 3D scanner with a built-in low-quality 2D camera and a high-resolution DSLR camera, one can produce a high-quality, color-calibrated 3D model for documentation purposes. The proposed processing workflow combines a novel region-based calibration...
In order to quickly register a camera into a 3D scene model under the Manhattan-world assumption, a method for matching corresponding 2D and 3D lines based on vanishing points is proposed in this paper. Firstly, the method detects line segments and estimates three orthogonal vanishing points to determine the focal length of the camera and the matrix from world space to camera space. Afterwards, one line is drawn...
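Recovering the focal length and the world-to-camera rotation from three orthogonal vanishing points, as the first step above describes, follows a standard derivation: with zero skew, unit aspect ratio, and known principal point c, two orthogonal vanishing points satisfy (v1 − c)·(v2 − c) = −f². A minimal NumPy sketch of that step (function names are hypothetical, not the paper's API):

```python
import numpy as np

def focal_from_vps(v1, v2, c):
    # Orthogonality constraint for two vanishing points of perpendicular
    # directions: (v1 - c).(v2 - c) = -f^2, assuming principal point c,
    # zero skew, and square pixels.
    f2 = -np.dot(np.asarray(v1, float) - c, np.asarray(v2, float) - c)
    return np.sqrt(f2)

def rotation_from_vps(vps, f, c):
    # Each column of R is the normalized back-projected viewing direction
    # K^-1 [u, v, 1]^T of one vanishing point (assumes each scene axis
    # points toward the camera's +z half-space).
    cols = []
    for (u, v) in vps:
        d = np.array([u - c[0], v - c[1], f])
        cols.append(d / np.linalg.norm(d))
    R = np.stack(cols, axis=1)
    # Re-orthonormalize via SVD to absorb detection noise
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt
```

Given the three orthogonal vanishing points of a Manhattan scene, any pair yields f, and the three back-projected directions assemble into the rotation from world axes to camera space.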
The intensity of the light observed from every position and direction in a real scene can be modeled as a high-dimensional field, namely the plenoptic function. This field codes the radiance information as a function of space, orientation, wavelength, and time. In the scope of depth estimation, several strategies have been developed to obtain a representation of the spatial structure of a scene. However,...
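The high-dimensional field mentioned in this abstract is commonly written, following Adelson and Bergen's formulation, as a 7-D function of viewing position, viewing direction, wavelength, and time:

```latex
P = P(\theta, \phi, \lambda, t, V_x, V_y, V_z)
```

Here $(V_x, V_y, V_z)$ is the position of the observer, $(\theta, \phi)$ the viewing direction, $\lambda$ the wavelength, and $t$ time; depth-estimation strategies can be seen as sampling restricted slices of this function.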
In this paper, we present an implementation that estimates a spatial focus map from the intentional reblurring of one image, which is the only input data. This enables flexible computation in the spatial domain rather than the frequency domain. The gradient magnitude, widely used in image processing, is used to derive a ratio map. The pixels closer to the focal point of the camera were on...
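The reblur-and-ratio idea in this abstract can be sketched as follows; this is a minimal single-image defocus estimate in the style of gradient-ratio methods (e.g. Zhuo and Sim), not the paper's actual implementation, and the function name and parameters are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def focus_map(img, sigma0=1.0, eps=1e-6):
    """Estimate per-pixel blur scale from one image via intentional reblur.

    At a step edge blurred by sigma, reblurring with a known sigma0 changes
    the gradient magnitude by R = sqrt(1 + sigma0^2 / sigma^2), so
    sigma = sigma0 / sqrt(R^2 - 1); small sigma means near the focal plane.
    """
    reblur = gaussian_filter(img, sigma0)        # the intentional reblur
    gy, gx = np.gradient(img)
    gyb, gxb = np.gradient(reblur)
    g = np.hypot(gx, gy)                          # gradient magnitude, original
    gb = np.hypot(gxb, gyb) + eps                 # gradient magnitude, reblurred
    ratio = np.maximum(g / gb, 1.0 + eps)         # ratio map; > 1 at edges
    return sigma0 / np.sqrt(ratio**2 - 1.0)
```

The estimate is only reliable at edges (flat regions give a ratio near 1 and a meaningless, very large sigma); full methods propagate the sparse edge estimates across the image.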