In many applications, such as video coding and model training, common objects often need to be extracted from a group of images. Co-segmentation is a new and efficient method for this task. In realistic applications, we observe that the images usually contain similar backgrounds (namely, similar-scene co-segmentation), such as the city landmark images collected from the...
Since the optical flow method cannot estimate large displacements, a two-dimensional compression-expansion method is proposed in this article to compensate for the large-scale movements in the image before optical flow estimation. As a result, the decorrelation effects caused by the lateral displacement that accompanies longitudinal compression can be effectively eliminated. Experimental...
This paper proposes a method to segment objects that bear logos. In the method, we first locate the logos by SIFT matching. Then, the object boundary is extracted based on the logo location. Finally, we model the object prior based on the boundary and introduce the prior into a Markov random field segmentation method to segment the object. To verify the proposed method, we collect a logo dataset from the...
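The abstract is truncated, but its final step, Markov random field segmentation guided by an object prior, can be sketched. The following is a generic illustration using iterated conditional modes (ICM) rather than the paper's exact energy; the `prior` map here is a hypothetical stand-in for the logo-derived boundary prior.

```python
import numpy as np

def mrf_segment_icm(image, prior, beta=1.0, lam=2.0, n_iter=5):
    """Binary MRF segmentation via Iterated Conditional Modes (ICM).

    image : (H, W) grayscale array in [0, 1]
    prior : (H, W) foreground-probability map in [0, 1] (here, an
            assumed stand-in for a logo-derived boundary prior)
    """
    labels = (prior > 0.5).astype(np.int32)  # initialise from the prior
    H, W = labels.shape
    for _ in range(n_iter):
        # foreground/background intensity means under the current labelling
        fg = image[labels == 1].mean() if (labels == 1).any() else 1.0
        bg = image[labels == 0].mean() if (labels == 0).any() else 0.0
        for y in range(H):
            for x in range(W):
                best, best_e = labels[y, x], np.inf
                for l in (0, 1):
                    mu = fg if l == 1 else bg
                    e = (image[y, x] - mu) ** 2          # data term
                    p = prior[y, x] if l == 1 else 1.0 - prior[y, x]
                    e += -lam * np.log(p + 1e-6)         # prior term
                    # Potts smoothness term over 4-neighbours
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W and labels[ny, nx] != l:
                            e += beta
                    if e < best_e:
                        best, best_e = l, e
                labels[y, x] = best
    return labels
```

ICM is a simple local optimizer; graph-cut solvers are the usual choice for MRF energies of this form, but the energy structure (data + prior + smoothness) is the same.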
In this paper, a novel method is proposed to predict attention in image scenes using a central-stimuli-sensitivity-based saliency model. The proposed method is based on the general “center-surround” visual attention mechanism and the spatial frequency response of the human visual system (HVS). Following three biologically inspired principles, the saliency value is computed from two “scatter matrices”...
In this paper, a new co-segmentation model that incorporates an active-contours-based method and a rewarding strategy is presented. We first build the co-segmentation energy function from two aspects: one is the foreground similarity between image pairs, and the other is the background consistency within each single image. Then, we optimize the energy function through a mutual optimization approach. We verify the proposed...
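The two energy terms named above, cross-image foreground similarity and per-image background consistency, can be illustrated with a minimal sketch. The specific choices here (histogram intersection for the foreground term, intensity variance for the background term, the weight `alpha`) are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def hist(values, bins=16):
    """Normalised intensity histogram over [0, 1]."""
    h, _ = np.histogram(values, bins=bins, range=(0.0, 1.0))
    return h / max(h.sum(), 1)

def coseg_energy(img1, img2, mask1, mask2, alpha=1.0):
    """Toy co-segmentation energy for a candidate labelling:
    foreground dissimilarity across the image pair plus background
    inconsistency (here: intensity variance) within each image.
    Lower energy = better labelling."""
    # foreground term: histogram intersection of the two foregrounds
    fg_sim = np.minimum(hist(img1[mask1]), hist(img2[mask2])).sum()
    # background term: variance of background intensities per image
    bg_var = img1[~mask1].var() + img2[~mask2].var()
    return (1.0 - fg_sim) + alpha * bg_var
```

A mutual optimization scheme, as the abstract describes, would alternately update one image's mask while holding the other fixed so as to reduce this energy.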
Visual saliency detection provides an important methodology for many computer vision applications. In this paper, we propose a novel method to detect salient regions in an image. To detect pixel-level saliency, the method uses a joint embedding of spatial and color cues, i.e., spatial-constraint-based saliency, color double-opponent saliency, and similarity-distribution-based saliency. Finally, a...
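The truncated abstract combines several per-pixel saliency cues into one map. A common generic way to fuse such cues, shown here as an assumption rather than the paper's actual combination rule, is to normalise each cue map and take a weighted average.

```python
import numpy as np

def normalize(m):
    """Rescale a saliency map to [0, 1]."""
    m = m.astype(float)
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def fuse_saliency(maps, weights=None):
    """Fuse several per-pixel saliency cue maps (e.g. a spatial cue,
    a color-opponency cue, a distribution cue) into one map by
    normalising each cue and averaging with the given weights."""
    maps = [normalize(m) for m in maps]
    if weights is None:
        weights = [1.0 / len(maps)] * len(maps)
    fused = sum(w * m for w, m in zip(weights, maps))
    return normalize(fused)
```

Normalising before fusion prevents a cue with a large dynamic range from dominating the others.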
In this paper, we propose a method for saliency detection in still images based on boosting algorithms. In contrast to pixel-level saliency detectors, we detect salient regions of an image based on sub-windows at arbitrary locations and sizes. For each window, we compute a set of features, including local contrast and gradient-histogram contrast. We construct our detector as a cascaded AdaBoost classifier...
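The core learner the abstract names, AdaBoost over window features, can be sketched in a few lines. This is a plain AdaBoost of decision stumps, not the paper's cascade; the feature matrix `X` is assumed to hold one row per sub-window, with columns such as local contrast and gradient-histogram contrast.

```python
import numpy as np

class Stump:
    """Decision stump: a signed threshold on a single feature column."""
    def fit(self, X, y, w):
        best = np.inf
        for f in range(X.shape[1]):
            for t in np.unique(X[:, f]):
                for s in (1, -1):                      # both polarities
                    pred = s * np.where(X[:, f] > t, 1, -1)
                    err = w[pred != y].sum()           # weighted error
                    if err < best:
                        best, self.f, self.t, self.s = err, f, t, s
        return best

    def predict(self, X):
        return self.s * np.where(X[:, self.f] > self.t, 1, -1)

def adaboost(X, y, rounds=10):
    """Train an AdaBoost ensemble of stumps; labels y in {-1, +1}."""
    w = np.full(len(y), 1.0 / len(y))                  # sample weights
    ensemble = []
    for _ in range(rounds):
        stump = Stump()
        err = max(stump.fit(X, y, w), 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)          # stump weight
        w *= np.exp(-alpha * y * stump.predict(X))     # re-weight samples
        w /= w.sum()
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, X):
    """Sign of the weighted vote of all stumps."""
    return np.sign(sum(a * s.predict(X) for a, s in ensemble))
```

A cascade, as in the abstract, would chain several such ensembles so that clearly non-salient windows are rejected early by the cheap initial stages.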
Salient object extraction has recently become a hot research topic, and it is very helpful in many other fields of computer vision, such as image recognition, content-based image retrieval, and image compression. In this paper, we propose a novel method to extract salient objects from natural images. In the proposed algorithm, we partition the image into patches of homogeneous regions, and then...