We present an application of the SURF operator and the epipolar constraint to stereo matching of binocular citrus images. First, R-B space images were obtained by a linear transform of the original images' color space. Second, the fast Hessian-matrix detector was used to detect interest points, and the SURF descriptor was used to describe their features. Finally, the Euclidean distance and the epipolar constraint...
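The matching stage described in this abstract — nearest-neighbour descriptor matching by Euclidean distance, filtered by the epipolar constraint — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the SURF detection step is omitted, and the descriptors, point coordinates, and fundamental matrix `F` are assumed to come from earlier detection and calibration stages.

```python
import numpy as np

def match_with_epipolar(desc_l, pts_l, desc_r, pts_r, F,
                        dist_thresh=0.3, epi_thresh=1.0):
    """Match left/right descriptors by Euclidean distance, then keep
    only pairs consistent with the epipolar constraint x_r^T F x_l ~ 0.

    desc_l, desc_r -- (N, D) descriptor arrays (e.g. from SURF)
    pts_l, pts_r   -- (N, 2) pixel coordinates of the interest points
    F              -- 3x3 fundamental matrix from stereo calibration
    """
    matches = []
    for i, d in enumerate(desc_l):
        dists = np.linalg.norm(desc_r - d, axis=1)  # Euclidean distances
        j = int(np.argmin(dists))
        if dists[j] > dist_thresh:
            continue  # best candidate is still too dissimilar
        xl = np.append(pts_l[i], 1.0)  # homogeneous coordinates
        xr = np.append(pts_r[j], 1.0)
        # algebraic epipolar residual; near zero for a consistent pair
        if abs(xr @ F @ xl) < epi_thresh:
            matches.append((i, j))
    return matches
```

For a rectified pair (pure horizontal translation), `F = [[0,0,0],[0,0,-1],[0,1,0]]` reduces the residual to the row difference `y_l - y_r`, so the filter simply rejects matches that do not lie on the same scanline.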
This paper presents a method of binocular vision obstacle detection based on the SIFT feature matching algorithm. First, a depth-measurement model based on stereo vision is built; it does not require recovering the three-dimensional coordinates of spatial points in the world coordinate system. Based on the characteristics of this model, we propose a binocular stereo vision calibration method based...
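The truncated abstract does not give the paper's exact depth model, but the standard simplification it alludes to — measuring depth directly from a rectified stereo pair without full 3D reconstruction — is the disparity relation Z = fB/d. A minimal sketch, assuming a calibrated, rectified rig:

```python
def depth_from_disparity(f_px, baseline_m, x_left, x_right):
    """Depth of a matched point from a rectified stereo pair.

    f_px       -- focal length in pixels (from calibration)
    baseline_m -- distance between the two camera centres, in metres
    x_left, x_right -- horizontal pixel coordinate of the same point
                       in the left and right images

    Uses Z = f * B / d, where d = x_left - x_right is the disparity.
    """
    d = x_left - x_right
    if d <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return f_px * baseline_m / d
```

Note that only the horizontal pixel offset is needed: no world coordinate frame or full triangulation enters the computation, which is the appeal of such models for obstacle detection.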
This paper provides an intuitive way to infer the space of a scene using stereo cameras. We first segmented the ground out of the image by adaptively learning a ground model in the image. We then used the convex hull to approximate the scene space. Objects within the scene can also be detected with the stereo cameras. Finally, we organized the scene space and the objects within the scene into...
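The convex-hull approximation of the scene space can be illustrated with a standard 2D hull over ground-plane points. This is a generic sketch (Andrew's monotone chain), not the paper's code; the input is assumed to be (x, z) coordinates of segmented ground points projected onto the ground plane.

```python
def convex_hull(points):
    """Andrew's monotone chain: convex hull of 2D points, returned
    counter-clockwise, without repeating the first vertex."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                    # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):          # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]   # drop duplicated endpoints
```

The hull of the ground points gives a compact polygonal footprint of the free scene space, which is what makes it a convenient approximation for later reasoning about objects inside it.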
In this paper, a visual Simultaneous Localization and Mapping (SLAM) algorithm suitable for indoor area-measurement applications is proposed. The algorithm focuses on computational efficiency. The only sensor used is a stereo camera mounted on a moving robot. The algorithm processes the acquired images, calculating the depth of the scene, detecting occupied areas, and progressively building...
Service robots deployed in domestic environments generally need the capability to deal with articulated objects, such as doors and drawers, in order to fulfill certain mobile manipulation tasks. This, however, requires that the robots be able to perceive the articulation models of such objects. In this paper, we present an approach for detecting, tracking, and learning articulation models for cabinet...
Dependable 3D perception modules are essential for the safe operation of robotic platforms. Furthermore, robot navigation and localization, as well as object recognition tasks, also require processing 2D color camera images. This information could be delivered synchronously by stereo vision sensors, with the 3D information automatically mapped onto the 2D camera image. However, embedded real-time stereo...
This paper proposes a new method of pest detection and positioning based on binocular stereo vision to obtain the location of pests, which is used to guide a robot in spraying pesticides automatically. Agricultural production in greenhouses requires large quantities of pesticides for pest control. Pesticide application is a major component of plant production costs in greenhouses,...
This paper presents a modular software architecture based on a stereo camera system that makes real-time 3D perception, interaction, and navigation possible for small humanoid robots. The hardware-independent architecture can be used for several purposes, such as 3D visualisation, object recognition, self-localization, and collision avoidance. First of all, the intrinsic calibration parameters of the...
The TI DSP (TMS320DM642 EVM) is used as the computation platform in our catcher robot system, with two CCDs as the source of stereo vision. The system separates the thrown-in target from the paired images and then calculates the centroid coordinates of each target image, thereby determining the spatial location of the object. The Lagrange interpolation formula and the linear function X = aZ + b are...
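The abstract is cut off before explaining how the Lagrange formula and the linear mapping X = aZ + b are used, so the following is only a plausible sketch of the two ingredients it names: evaluating a Lagrange interpolating polynomial through sampled points, and fitting the coefficients a, b of a linear coordinate mapping from calibration samples by least squares.

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through the
    points (xs[i], ys[i]) at position x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)  # basis polynomial L_i(x)
        total += term
    return total

def fit_linear(zs, xs):
    """Least-squares fit of X = a*Z + b from calibration pairs
    (zs[i], xs[i]); returns (a, b)."""
    n = len(zs)
    mz = sum(zs) / n
    mx = sum(xs) / n
    a = (sum((z - mz) * (x - mx) for z, x in zip(zs, xs))
         / sum((z - mz) ** 2 for z in zs))
    return a, mx - a * mz
```

With two or more calibration samples, `fit_linear` recovers the a and b of the paper's linear function, while `lagrange_eval` can interpolate intermediate values of a sampled trajectory.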