We investigate pedestrian detection in depth images. Unlike pedestrian detection in intensity images, detection in depth images reduces the effect of complex backgrounds and illumination variation. We propose a new feature descriptor, the Histogram of Depth Difference (HDD), for this task. The proposed HDD descriptor describes the depth variance in a local region as Histogram of...
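The abstract only names the descriptor, so the exact definition is not available; the following is a minimal sketch of one plausible reading, where depth differences between neighbouring pixels in a local patch are accumulated into a fixed-size, L1-normalised histogram. The neighbourhood choice, bin layout, and normalisation are assumptions, not the paper's definition.

```python
def hdd_histogram(patch, num_bins=8, max_diff=1.0):
    """Sketch of a Histogram of Depth Difference (HDD) style descriptor.

    `patch` is a 2D list of depth values (e.g. metres). For every pixel we
    take the absolute depth difference to its right and bottom neighbours
    and bin the differences linearly over [0, max_diff).
    """
    hist = [0] * num_bins
    rows, cols = len(patch), len(patch[0])
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):  # right and bottom neighbours
                rr, cc = r + dr, c + dc
                if rr < rows and cc < cols:
                    diff = abs(patch[rr][cc] - patch[r][c])
                    diff = min(diff, max_diff - 1e-9)  # clamp into range
                    hist[int(diff / max_diff * num_bins)] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]  # L1-normalised descriptor
```

On a perfectly flat patch every difference is zero, so all the mass lands in the first bin; a patch crossing an object boundary spreads mass into the higher-difference bins, which is what makes such a descriptor discriminative for silhouettes in depth data.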
We present an actively altruistic mobile robot system that implements a neural network to perform head pose tracking to identify a person who looks lost. Our results show that a 3-level discretization of head pose performed better than 9 levels, but that 9 levels performed better at detecting head-pose changes. We also present results from analyzing a method that determines whether or not a person looks...
This paper suggests an algorithm for reactive eye movement and corresponding facial expressions that makes a robot lifelike, improving human-robot interaction (HRI). Difference-image, afterimage, concentration, and eyelid-movement processes are suggested to determine the amount of reactive eye movement from single-camera input images. Then, a simple emotion generation process...
In this paper, we present a new vision-based multiple-human tracking system. This novel 3D visual tracking system is capable of automatically identifying, labeling and tracking multiple humans in real time, even when they occlude each other. Furthermore, the multiple-human tracker was implemented in a vision-driven robot system for human-robot interaction. The distributed system comprises 4 subsystems:...
This paper proposes a constant-execution-time multiple-human detection system. Human detection from video sequences is important in monitoring camera systems and robot vision. It is important that the execution time of a human detection system does not change regardless of the number of persons. In this paper, image features were calculated using Cubic Higher-Order Local Auto-Correlation (CHLAC)...
The study concerns a 2D safety vision system for human-robot collaborative work environments. The vision system is distinguished by its freedom from structuring the environment with physical markings: neither painting on the floor nor hanging a camera from the ceiling is necessary. To preserve safety states, we instead set passive markings in the original image based upon Shannon information...
In this paper, we present an approach, inspired by human behavior, for predicting the state of a high-speed object from the state of the object that causes that high speed, e.g. predicting the state of a high-speed puck hit by a lower-speed paddle in an air hockey game. The proposed approach eliminates the need for high-speed sensors, such as high-speed cameras and grabbers, for...
Simulated reality environments that incorporate humans and physically plausible robot behavior, provide natural interaction channels, and offer the option to link the simulator to real perception and motion are gaining importance for the development of cognitive, intuitively interacting, and collaborating robotic systems. In the present work we introduce a head tracking system which is utilized to incorporate...
This paper introduces the OP:Sense system, which is able to track objects and humans. To reach this goal, a complete surgical robotic system is built that can be used for telemanipulation as well as for autonomous tasks, e.g. cutting or needle insertion. Two KUKA lightweight robots that feature seven DOF and allow variable stiffness and damping due to an integrated impedance controller are used...
Video analysis aiming at efficient pedestrian detection is an important research area in computer vision and robotics. Although this is a well-studied topic, successful detection remains a challenge in outdoor, low-resolution images. We present efficient detection metrics which exploit the fact that human movement presents characteristic patterns. Unlike many methods which perform an intra-blob...
In this article, we discuss how to recognize groups of humans and objects that interact with each other, in activities such as a conversation among people looking at the same screen, from camera observations. Although previous work on recognizing human behaviors with cameras mainly discusses the behaviors of a single person, such as moving from one place to another, we rather focus on the interactions...
The Indirect Immunofluorescence (IIF) test on human epithelial (HEp-2) cells has been the gold standard for identifying the presence of Anti-Nuclear Antibodies (ANA) due to its high sensitivity and the large range of antigens that can be detected. Furthermore, the IIF ANA test allows the positive sample strength (sample end-point titre) to be reported. Despite its advantages, the IIF ANA test needs to be...
Recent algorithms for monocular motion capture (MoCap) estimate weak-perspective camera matrices between images using a small subset of approximately-rigid points on the human body (i.e. the torso and hip). A problem with this approach, however, is that these points are often close to coplanar, causing canonical linear factorisation algorithms for rigid structure from motion (SFM) to become extremely...
We propose a robust method of estimating head orientation based on HOG. The proposed method is able to estimate head orientation with a camera even when a user is not facing the camera. With this method, head orientation can be estimated precisely about all three axes: roll, yaw, and pitch. Furthermore, a simple and robust user identification method is constructed using the results of the Approximate...
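The core building block of any HOG pipeline is a histogram of gradient orientations weighted by gradient magnitude. The following is a toy sketch of that single step on one cell, not the paper's full method: the cell grid, block normalisation, and the orientation classifier that the abstract implies are all omitted here.

```python
import math

def gradient_orientation_histogram(img, num_bins=9):
    """HOG-style feature step: histogram of unsigned gradient orientations.

    `img` is a 2D list of grey values. Central differences give per-pixel
    gradients; orientations in [0, 180) degrees are binned, each vote
    weighted by the gradient magnitude at that pixel.
    """
    hist = [0.0] * num_bins
    rows, cols = len(img), len(img[0])
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            gx = img[r][c + 1] - img[r][c - 1]
            gy = img[r + 1][c] - img[r - 1][c]
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue  # flat region, no orientation to vote for
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[min(int(ang / 180.0 * num_bins), num_bins - 1)] += mag
    return hist
```

An image whose intensity increases purely left-to-right produces only horizontal gradients, so all the magnitude lands in the first (0-degree) bin; head orientation estimation then reduces to comparing such histograms across pose classes.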
In this paper, we propose a pointing interaction system which allows users to control devices in Intelligent Space by hand pointing. The system consists of multiple camera devices and a pan-tilt projector. It recognizes the user's face, the user's head orientation, and the spot the user points at with the camera devices. However, to achieve a real interaction, intuitive feedback is required...
Existing high dynamic range (HDR) imaging acquisition techniques are limited by the skill of the end user and operate offline. We propose an algorithm to detect HDR scenes and suggest optimal exposure sets for a given scene.
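The abstract does not give the detection criterion, so here is a minimal heuristic sketch of the idea: flag a scene as HDR when a single exposure clips a noticeable fraction of pixels at both ends of the tonal range, then propose a simple exposure bracket. The 5% threshold, the clip levels, and the 3-shot ±2 EV bracket are all illustrative assumptions, not the proposed algorithm.

```python
def is_hdr_scene(gray_pixels, clip_frac=0.05):
    """Heuristic: HDR when both shadows and highlights clip noticeably.

    `gray_pixels` is a flat list of 8-bit grey values from a single
    metered exposure. Thresholds (<=5 dark, >=250 bright, 5% of pixels)
    are assumptions for illustration.
    """
    n = len(gray_pixels)
    dark = sum(1 for p in gray_pixels if p <= 5) / n
    bright = sum(1 for p in gray_pixels if p >= 250) / n
    return dark >= clip_frac and bright >= clip_frac

def suggest_exposures(base_ev=0.0, stops=2.0):
    """A simple 3-shot bracket around the metered exposure (in EV)."""
    return [base_ev - stops, base_ev, base_ev + stops]
```

A real system would likely adapt both the number of shots and the EV spacing to the measured dynamic range rather than using a fixed bracket.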
Posture language is rich in ways for individuals to express a variety of desires, feelings, and thoughts. Recognizing human posture via computer is a challenging task, as it involves multiple issues ranging from imaging to recognition algorithms and system resources. The proposed work aims to solve the viewpoint variation issue through a causal-topology Hidden Markov Model (HMM) for view-independent multiple...
This paper is dedicated to people tracking and identification in a multi-camera surveillance system. In the proposed method, each person image is extracted from each camera and then labeled with its color vector. The color vector provides a similarity probability for each person appearing in different cameras' surveillance frames. By combining the pedestrian's trajectory with relations among different...
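The abstract does not define the color vector, so the following is a hedged sketch of one common realisation: a joint RGB histogram per person image, compared across cameras with histogram intersection. Both the binning and the similarity measure are assumptions for illustration, not the paper's formulation.

```python
def color_vector(pixels, bins_per_channel=4):
    """Sketch of a per-person colour vector: a joint RGB histogram.

    `pixels` is a list of (r, g, b) tuples with channels in 0..255.
    Returns an L1-normalised histogram with bins_per_channel**3 bins.
    """
    n = bins_per_channel
    hist = [0] * (n ** 3)
    for r, g, b in pixels:
        idx = (r * n // 256) * n * n + (g * n // 256) * n + (b * n // 256)
        hist[idx] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

def similarity(v1, v2):
    """Histogram intersection: 1.0 for identical colour distributions,
    0.0 when the two people share no colour bins at all."""
    return sum(min(a, b) for a, b in zip(v1, v2))
```

The intersection score can then serve as the per-person matching probability that is fused with trajectory constraints across cameras.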
In this paper, a new approach for detecting live humans in collapsed environments using an autonomous robot is proposed. Human detection in an unmanned area can be done only by an automated system. The proposed alive-human detection system uses ultrasonic sensors and a camera to record, transmit, and analyze the condition of a human body. The task of identifying human beings in rescue...
Machine vision is the application of computer vision and related technologies to industrial automation. Automated visual inspection is one of these applications, which can be used to solve many problems in industry. Industrial applications require customized solutions, subject to several particular constraints. The final step, integrating a vision system into an industrial process, is not an easy...