This paper deals with automatic estimation of the horizon in videos from fixed surveillance cameras. The proposed algorithm is fully automatic in the sense that no user input is needed per-camera and it works with various scenes (indoor, outdoor, traffic, pedestrian, livestock, etc.). The algorithm detects moving objects, tracks them in time, assesses some of their geometric properties related to...
Obstacle detection and tracking is a fundamental task for several Advanced Driver Assistance Systems (ADAS) and self-driving cars. Several approaches have been presented in the literature in recent years, and many of them are based on visual sensors. In this paper we propose an approach that uses only stereo cameras to detect and track obstacles and compute visual odometry to get the vehicle's ego-velocity...
Human motion capture systems are being increasingly studied in the area of computer vision and also by major entertainment industries. These systems are able to track the position and orientation of joints of the body and its trajectory in space over a period of time. They are used in various applications such as digital games, animation of virtual characters for film and television, gesture recognition,...
This study proposes a method for analysis and calibration of the geo-coding error in mapping of the moving targets, 'movers', detected in the Wide Area Motion Imagery (WAMI). To estimate the accurate position of the movers, an iterative Ordinary Least Squares (OLS) optimisation of the camera model parameters is performed. The OLS is set to minimise the Euclidean distance between the locations of the...
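The abstract above describes an OLS optimisation that minimises the Euclidean distance between mapped mover locations and their reference positions. As a minimal, hypothetical sketch of that idea (not the authors' camera model, which involves full camera parameters), the simplest case — calibrating a constant translation bias in the geo-coding — can be solved in closed form:

```python
import numpy as np

# Hypothetical illustration: refine a 2-parameter translation offset of a
# geo-coding model by ordinary least squares. 'mapped' are mover positions
# produced by the current camera model; 'reference' are their known ground
# locations (e.g. GPS tracks). Names and the translation-only model are
# assumptions for this sketch.
def refine_offset(mapped, reference):
    residuals = reference - mapped        # (N, 2) easting/northing errors
    # For a pure translation model, the OLS solution is the mean residual.
    offset = residuals.mean(axis=0)
    corrected = mapped + offset
    rmse = np.sqrt(((reference - corrected) ** 2).sum(axis=1).mean())
    return offset, rmse

mapped = np.array([[10.0, 5.0], [12.0, 7.0], [15.0, 9.0]])
reference = mapped + np.array([0.5, -0.3])  # simulated constant geo-coding bias
offset, rmse = refine_offset(mapped, reference)
```

In the paper's iterative setting, nonlinear camera parameters would replace the translation model and the solve would be repeated until convergence.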
This paper presents a fast and accurate method for real-time 3D reconstruction by using a depth camera and inertial sensor. Generally, the localization information of the camera is obtained from the depth data by using the ICP (Iterative Closest Point) algorithm. When the depth camera moves fast, ICP will converge to a bad local minimum which results in tracking failure. To prevent this case, a camera...
This work presents a mixed reality environment for orthopaedic interventions that provides a 3D overlay of Cone-beam CT images, surgical site, and real-time tool tracking. The system uses an RGBD camera attached to the detector plane of a mobile C-arm, which is a typical device to acquire X-Ray images during surgery. Calibration of the two devices is done by acquiring simultaneous CBCT and RGBD scans...
Diminished reality (DR) is a technique to remove undesirable objects from a video stream in real time. DR methods calculate a user's camera pose using vision- or sensor-based approaches to recover and overlay a background image onto the camera view. Because they rely on 6DoF camera registration methods, DR results are often degraded by misregistration. To solve this problem, we propose a registration framework...
A significant issue associated with the use of video see-through head-mounted displays (VST-HMD) for augmented reality is the presence of latency between real-world images and the images displayed to the HMD. For a static scene, this latency poses no real problem; however, for dynamic scenes, which arise when the HMD user moves their head, when real-world objects move, or a combination of the two,...
This paper presents ARial Texture, a drone display with dynamic projection mapping on the drone's propellers. In our prototype, a motion capture system tracks the drone's position and orientation, and a projector projects an image onto the drone's four propellers. To evaluate the visibility of the display, we conducted quantitative and qualitative experiments in which the propellers were covered...
Reflections can obstruct content during video capture and hence their removal is desirable. Current removal techniques are designed for still images, extracting only one reflection (foreground) and one background layer from the input. When extended to videos, unpleasant artifacts such as temporal flickering and incomplete separation are generated. We present a technique for video reflection removal...
First-person videos (FPVs) captured by wearable cameras have undesired shakiness because of fast-changing views. When existing video stabilization techniques are applied, FPVs are transformed into cinematographic videos, losing first-person motion information (FPMI) such as the recorder's interests and actions. We propose a system that can enhance viewability of FPVs by stabilizing them while...
Camera pose estimation is a fundamental problem of Augmented Reality and 3D reconstruction systems. Despite newly developed, better-performing direct methods, state-of-the-art methods still estimate erroneous poses due to sensor noise, environmental conditions and challenging trajectories. By adding a back-end mapping process, SLAM systems achieve better performance and are more...
In this paper, we present a tracking system to estimate the position of a surgical instrument used in minimally invasive spine surgeries for training. The purpose of our system is to obtain information about movements and surgeons' skills during training. The system uses four infrared markers embedded on a commonly used surgical instrument. At least two Wii Remote Controllers are needed for calculating...
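Locating a marker from two IR cameras (which is what two Wii Remotes effectively provide) reduces to triangulating the intersection of two viewing rays. As an illustrative sketch only — camera centres and ray directions are assumed already known from calibration, and all names are hypothetical — the standard midpoint method finds the point halfway between the closest points on the two rays:

```python
import numpy as np

# Midpoint triangulation: given camera centres c1, c2 and viewing-ray
# directions d1, d2 toward the same IR marker, solve for the ray
# parameters s, t minimising |(c1 + s*d1) - (c2 + t*d2)| and return the
# midpoint of the two closest points.
def triangulate_midpoint(c1, d1, c2, d2):
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Normal equations of the two-ray least-squares problem.
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    s, t = np.linalg.solve(A, b)
    p1, p2 = c1 + s * d1, c2 + t * d2
    return (p1 + p2) / 2
```

With noisy rays the two closest points no longer coincide, and the distance between them gives a useful per-measurement quality estimate.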
Body sensor networks (BSNs) have been increasingly used in medical applications such as exoskeleton control, powered prosthesis control, tremor suppression, gesture and sign language recognition systems, and human-computer interfaces. This review explores the use of multi-modal sensor fusion in BSNs for the detection, measurement and classification of upper limb movements for the control of dynamic systems...
Fully automated vehicles and mobile robots operate in a shared environment with pedestrians. To minimize the risk for pedestrians, it is very important to track them in a precise way. As cameras are often installed in surveillance situations, they are used for tracking pedestrians in a shared environment. To improve the accuracy of the tracking, it is necessary to include all available context information...
Our goal is to automatically detect which direction a child is facing based on a single, simple overhead picture, and to track that direction across time. Engaging in joint attention, the shared focus of two individuals on some object of interest, is a strong cue of typical development in children, and its absence can be an indicator of autism spectrum disorder or other pervasive developmental...
One of the main problems being addressed intensively in modern societies is the ageing of the population. Today's challenge is to allow elderly people to remain autonomous at home for as long as possible. Currently, one of the active research fields is the development of assistive living systems (ALS) that aim to support people at home. This can help elderly people to stay at home as long...
We present an approach for real-time, robust and accurate hand pose estimation from moving egocentric RGB-D cameras in cluttered real environments. Existing methods typically fail for hand-object interactions in cluttered scenes imaged from egocentric viewpoints—common for virtual or augmented reality applications. Our approach uses two subsequently applied Convolutional Neural Networks (CNNs) to...
A device for a low-to-intermediate level of gesture recognition which uses a passive thermal-infrared (PIR) sensor array is described. The detection system discriminates between a small number of simple dynamic gestures, such as ‘hand swiping’ in different directions and at varying velocities. The technology is low-powered in terms of both energy consumption and computational power. The sensor enables...
In Industry 4.0 scenarios, autonomously navigating robots will have to perform dedicated tasks in controlled environments, such as production halls or storage facilities. In the presence of pedestrians and other dynamic objects, robust collision detection is imperative in order to avoid harm of human or material. Supplementary sensors as part of the infrastructure may provide additional real-time...