UAVs (Unmanned Aerial Vehicles) are widely used in power line inspection, but their limited autonomous cruise capability imposes strict requirements on operators and landing sites during inspection missions. This paper presents a vision-based autonomous landing control technique for UAVs recharging at electric towers. The proposed system consists of three...
Understanding the camera wearer's activity is central to egocentric vision, yet one key facet of that activity is inherently invisible to the camera—the wearer's body pose. Prior work focuses on estimating the pose of hands and arms when they come into view, but this 1) gives an incomplete view of the full body posture, and 2) prevents any pose estimate at all in many frames, since the hands...
Advances in new technologies have boosted the development of systems that assist visually impaired people in their daily lives. These systems aim to help by providing the user with critical information about their environment through senses they can still use. In this paper, we discuss a system that uses existing technologies such as Optical Character Recognition (OCR) and Text-to-Speech...
We present a Bayesian framework for estimating 3D human pose and camera from a single RGB image. We develop a generative model where a 3D pose is rendered onto an image (via the camera), which then generates a detection probability map for each body part. We represent a human pose with a set of 3D cylinders in space, one for each body part, and we place kinematic and self-intersection priors on the...
We propose a deep convolutional neural network for 3D human pose and camera estimation from monocular images that learns from 2D joint annotations. The proposed network follows the typical architecture, but contains an additional output layer which projects predicted 3D joints onto 2D, and enforces constraints on body part lengths in 3D. We further enforce pose constraints using an independently trained...
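The two ingredients this abstract names—projecting predicted 3D joints onto 2D and constraining bone lengths in 3D—can be sketched in a few lines of NumPy. This is a minimal illustration only, not the paper's implementation: the joint set, camera intrinsics, and reference bone lengths below are hypothetical values chosen for the example.

```python
import numpy as np

# Hypothetical 5-joint chain in camera coordinates (meters),
# standing in for the 3D joints predicted by the network.
joints_3d = np.array([
    [0.0, 0.00, 3.0],
    [0.1, 0.00, 3.0],
    [0.1, 0.45, 3.1],
    [0.1, 0.88, 3.2],
    [0.2, 0.92, 3.1],
])

def project(joints, f=1000.0, cx=320.0, cy=240.0):
    """Pinhole projection of 3D joints onto the 2D image plane (pixels)."""
    x, y, z = joints[:, 0], joints[:, 1], joints[:, 2]
    return np.stack([f * x / z + cx, f * y / z + cy], axis=1)

def bone_lengths(joints, bones):
    """3D Euclidean length of each bone, given as (joint_i, joint_j) pairs."""
    return np.array([np.linalg.norm(joints[i] - joints[j]) for i, j in bones])

bones = [(0, 1), (1, 2), (2, 3), (3, 4)]

# Projection output: compared against 2D joint annotations in training.
joints_2d = project(joints_3d)

# Length constraint: penalize deviation from reference bone lengths
# (illustrative priors, not values from the paper).
ref_lengths = np.array([0.10, 0.45, 0.44, 0.14])
length_penalty = np.sum((bone_lengths(joints_3d, bones) - ref_lengths) ** 2)
```

A training loss along these lines would combine the 2D reprojection error with `length_penalty`, letting the network learn 3D structure from 2D annotations alone.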
This paper presents a new automatic approach to building a videorama with shallow depth of field. We stitch the static background of video frames and render the dynamic foreground onto the enlarged background after foreground/background segmentation. To this end, we extract the depth information from a two-view video stream. We show that the depth cues combined with color cues improve segmentation...
We propose a novel successive convex matching method for human action detection in cluttered video. Human actions are represented as sequences of poses, and specific actions are detected by matching pose sequences. Since we represent actions as the evolution of poses and shapes, the proposed method can detect actions in videos that involve fast camera motions. Template sequence to video registration...
Images of an object undergoing ego- or camera-motion often appear to be scaled, rotated, and deformed versions of each other. To detect and match such distorted patterns to a single sample view of the object requires solving a hard computational problem that has eluded most object matching methods. We propose a linear formulation that simultaneously finds feature point correspondences and global geometrical...