This paper presents a framework for adaptively tracking non-rigid objects by fusing visual and motion feature descriptors. The proposed technique can automatically detect an object from different points of view as soon as the object starts moving. Moreover, an object model is created and gradually updated using both new and previous features. As a result, the proposed technique is able to track a...
As the global threat of terrorism continues to escalate, finding efficient ways to ensure public safety is becoming a major concern for the authorities. This paper presents an investigation of scanning and detection of concealed weapons, with possible applications in high-risk areas such as airports. Using a passive and non-intrusive scanning method such as Infrared (IR) imaging, and combining...
The proposed technique is a pixel-level fusion method for two imaging sensors. The fused image provides a scene representation that is robust against illumination changes and different weather conditions. Thus, combining the advantages of each camera extends the capabilities of many computer vision applications, such as video surveillance and automatic object recognition...
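The pixel-level fusion described above can be illustrated with a minimal sketch. The abstract does not specify the fusion rule, so a plain per-pixel weighted average of two co-registered frames is assumed here; `alpha` is an illustrative parameter, not one from the paper.

```python
import numpy as np

def pixel_fuse(visible, infrared, alpha=0.5):
    """Per-pixel weighted average of two co-registered frames.

    alpha weights the visible channel, (1 - alpha) the infrared one.
    Both inputs must share the same shape and alignment.
    """
    return alpha * visible.astype(float) + (1 - alpha) * infrared.astype(float)
```

Any pixel-level rule (max selection, PCA weighting, wavelet-domain rules) could replace the average; the key assumption is that the two sensors are registered so corresponding pixels view the same scene point.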
Since the challenging problem of visual object categorization has attracted increasing attention in recent years, we present in this paper a novel approach for it, called statistical-measures-based image modeling. This avoids the major difficulty of the popular “bag-of-visual-words” approach: the need to fix a visual vocabulary size. We use a series of statistical measures over our proper region...
Automatic TV commercial detection has become an indispensable part of content-based video analysis due to the explosive growth in TV commercial volume. In this paper, a multi-modal (i.e., visual, audio, and textual modalities) commercial digesting scheme is proposed to address two challenges in commercial detection: the generation of mid-level semantic descriptors and the application...
An infrared target detection algorithm based on a visual attention model is presented. Visual features are extracted from brightness contrast in still frames and from motion vectors in image sequences, and are then linearly combined into a saliency map, with locally adaptive thresholding used instead of a "Winner-Takes-All" neural network (Winner-Take-All,...
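The locally adaptive thresholding mentioned above can be sketched as follows. The paper's exact rule is not given in the abstract, so a common formulation is assumed: a pixel is kept when its saliency exceeds the local mean plus `k` local standard deviations; the window size and `k` are illustrative.

```python
import numpy as np

def local_adaptive_threshold(saliency, win=15, k=1.0):
    """Binarize a saliency map with a locally adaptive threshold.

    A pixel fires when saliency > local mean + k * local std, computed
    over a win x win neighborhood (edge-padded). Unlike Winner-Take-All,
    several spatially separate targets can fire at once.
    """
    s = saliency.astype(float)
    pad = win // 2
    padded = np.pad(s, pad, mode="edge")
    h, w = s.shape
    mean = np.empty_like(s)
    std = np.empty_like(s)
    for i in range(h):          # simple sliding window, not optimized
        for j in range(w):
            patch = padded[i:i + win, j:j + win]
            mean[i, j] = patch.mean()
            std[i, j] = patch.std()
    return s > mean + k * std
```

Because the threshold adapts to each neighborhood, a dim target in a dark region can still be detected, which is the usual motivation for preferring this over a single global winner.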
In this paper, we present a football event detection method using multiple feature extraction and fusion. Instead of relying on low-level features alone, the proposed method is built upon visual and auditory features, text, and audio keywords, and detects football events accurately. Experimental results have...
In this paper we propose a novel feature fusion technique for the Saliency-Based Visual Attention Model presented in [Itti, 1998]. That model has three conspicuity maps, which are linearly combined from 12 color maps, 6 intensity maps, and 24 orientation maps (42 feature maps overall) through across-scale combination and normalization. We utilize a genetic algorithm...
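The linear combination of conspicuity maps described above can be sketched as below. The genetic-algorithm weight search itself is omitted; the weights are plain parameters here, and the per-map min-max normalization is a simplification of the model's normalization operator.

```python
import numpy as np

def fuse_conspicuity(color, intensity, orientation, weights=(1/3, 1/3, 1/3)):
    """Weighted linear combination of three conspicuity maps into a
    saliency map. Each map is min-max normalized to [0, 1] first; in
    the discussed approach the weights would be evolved by a genetic
    algorithm rather than fixed.
    """
    normed = []
    for m in (color, intensity, orientation):
        m = m.astype(float)
        rng = m.max() - m.min()
        normed.append((m - m.min()) / rng if rng > 0 else np.zeros_like(m))
    return sum(w * n for w, n in zip(weights, normed))
```

A GA would treat the weight triple as the chromosome and score each candidate by how well the resulting saliency map matches ground-truth fixations, but that fitness function is specific to the paper and not reproduced here.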
A novel target pseudo-color image fusion algorithm for visual and infrared images, based on image features in the wavelet domain, is proposed in this paper. After wavelet decomposition of the source images, edge features are extracted from each low-frequency component. The fusion rules are defined by this edge information, using a local modulus-maximum rule for edge pixels and their sub-band neighboring pixels, and...
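The modulus-maximum selection at the heart of such wavelet-domain fusion can be sketched as below. This is the generic rule (per coefficient, keep whichever source has the larger absolute value); the paper's edge-guided variant restricts it to edge pixels and their neighbors, which is not reproduced here.

```python
import numpy as np

def max_modulus_select(coeff_a, coeff_b):
    """Fuse two corresponding wavelet sub-bands by keeping, at each
    position, the coefficient with the larger modulus (absolute value).

    Large-modulus coefficients correspond to strong edges/details, so
    this rule preserves the sharper structure from either source.
    """
    return np.where(np.abs(coeff_a) >= np.abs(coeff_b), coeff_a, coeff_b)
```

In a full pipeline this would be applied to the detail sub-bands of each decomposition level (e.g. from PyWavelets' `wavedec2`), with a separate rule, often averaging, for the low-frequency approximation.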