Accurate measurements on manufactured objects, including free-form surfaces, are of prime interest for (industrial) computer vision. However, the resolution of standard CCD sensors is a strong limitation for objects of large extent. To overcome this limitation, image sequences covering the areas of interest are acquired and registered. Making use of structured light for obtaining 3D information requires...
Solving visual feature correspondence and dealing with missing data are two limiting factors for point registration techniques. To tackle this problem, we conceived a pattern, primarily designed for structured-light vision systems, which may also be used for camera calibration purposes. The pattern design previously presented provides numerous benefits. Among them, we first present...
In this paper, we present a novel visual servoing technique for unknown environments with untextured objects by means of a structured light vision system. The scene surfaces are assumed to be piecewise planar; however, they are free to move and subject to deformation. We first present a robust coded pattern which allows fast decoding and quickly solves the correspondence problem between...
The development of new medical therapies often requires experiments on small animals. In order to improve the medical protocol during treatments with needles, we propose a new robotic needle insertion system using CT-scan imaging and visual servoing. The biologist defines the skin entry point and the target to be reached in the CT image. The needle target is then expressed in the robot frame thanks...
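Expressing a CT-defined target in the robot frame amounts to applying a rigid homogeneous transformation to the image-frame point. A minimal sketch, assuming the 4x4 image-to-robot transform has already been obtained by registration (the matrix layout and point below are illustrative, not taken from the abstract):

```python
def transform_point(T, p):
    """Apply a 4x4 homogeneous transform T (row-major nested lists)
    to a 3D point p, returning the point in the target frame."""
    x, y, z = p
    # Rotation part times the point, plus the translation column.
    return [T[i][0] * x + T[i][1] * y + T[i][2] * z + T[i][3]
            for i in range(3)]

# Hypothetical transform: identity rotation, translation (10, 0, -5).
T_image_to_robot = [[1, 0, 0, 10],
                    [0, 1, 0,  0],
                    [0, 0, 1, -5],
                    [0, 0, 0,  1]]
target_robot = transform_point(T_image_to_robot, (1, 2, 3))  # → [11, 2, -2]
```

In practice the transform itself would come from registering fiducials visible both in the CT volume and in the robot workspace; the sketch only covers the final change of frame.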
In vision-based medical applications involving tools attached to a robotic arm, it is essential to be able to accurately localize these tools in the robot end-effector frame. Indeed, inaccurate tool calibration has a direct impact on the accuracy of the task. In this paper, we introduce a versatile and robust calibration technique to estimate the relative position between the robot end-effector and...
In this paper we present a new monochromatic pattern for robust structured-light coding based on a spatial neighborhood scheme using the M-array approach. The proposed pattern is robust in that it tolerates a high error rate, characterized by an average Hamming distance higher than 6. We tackle the design problem with the definition of a small set of symbols associated with simple geometrical primitives...
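The robustness claim rests on the Hamming distance between the codewords that the spatial neighborhoods define: the larger the minimum pairwise distance, the more symbol decoding errors can be detected or corrected. A sketch of how that margin is checked for a codebook (the example codewords are illustrative, not the paper's actual pattern):

```python
from itertools import combinations

def hamming(a, b):
    """Number of symbol positions where two equal-length codewords differ."""
    return sum(x != y for x, y in zip(a, b))

def min_pairwise_hamming(codebook):
    """Minimum Hamming distance over all distinct codeword pairs;
    a code with minimum distance d detects up to d-1 symbol errors."""
    return min(hamming(a, b) for a, b in combinations(codebook, 2))

# Hypothetical 3x3-neighborhood codewords over a small symbol alphabet.
codebook = [(0, 1, 2, 0, 1, 2, 0, 1, 2),
            (2, 0, 1, 2, 0, 1, 2, 0, 1),
            (1, 2, 0, 1, 2, 0, 1, 2, 0)]
print(min_pairwise_hamming(codebook))  # → 9: every position differs
```

An M-array construction additionally guarantees that every neighborhood window appears at most once in the pattern, which is what makes the correspondence unambiguous.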
The study of biological process evolution in small animals requires time-consuming and expensive analyses of a large population of animals. Serial analyses of the same animal are potentially a great alternative. However, non-invasive procedures must be set up to retrieve valuable tissue samples from precisely defined areas in living animals. Taking advantage of the high resolution level of in vivo...
In this paper, the real-time segmentation of surgical instruments in color images used in minimally invasive surgery is addressed. This work has been developed in the scope of robotized laparoscopic surgery, specifically for the detection and tracking of gray regions, accounting for images of metallic instruments inside the abdominal cavity. In this environment, the moving background due...
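One common way to detect gray, metallic regions in a color image is to classify pixels by saturation: metallic surfaces are nearly achromatic (low saturation) but brighter than shadow. A minimal per-pixel sketch of that idea, with thresholds chosen purely for illustration rather than taken from the paper:

```python
import colorsys

def is_gray_pixel(r, g, b, sat_max=0.2, val_min=0.3):
    """Classify an RGB pixel (channels in [0, 1]) as gray/metallic-like:
    low saturation (nearly achromatic) yet bright enough not to be shadow.
    Thresholds sat_max and val_min are illustrative assumptions."""
    _, s, v = colorsys.rgb_to_hsv(r, g, b)
    return s <= sat_max and v >= val_min

print(is_gray_pixel(0.5, 0.5, 0.5))   # mid-gray → True
print(is_gray_pixel(0.9, 0.1, 0.1))   # saturated red → False
print(is_gray_pixel(0.05, 0.05, 0.05))  # dark shadow → False
```

A real-time system would apply such a test (or a learned color model) over the whole frame and then group the resulting mask into instrument regions to track.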