Recently, approaches based on deep convolutional neural networks (ConvNets) have achieved good performance on the task of human action recognition. A series of approaches employ a two-stream architecture, which takes advantage of both appearance information and motion information. However, the two-stream architecture has some drawbacks. First, it does not fully utilize the temporal information of...
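At prediction time, a two-stream architecture commonly combines the appearance (RGB) stream and the motion (optical-flow) stream by late fusion of their per-class scores. A minimal sketch in plain Python, with illustrative scores and an assumed equal-weight averaging rule (the truncated abstract does not specify the actual fusion):

```python
import math

def softmax(scores):
    """Convert raw per-class scores into probabilities."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_streams(rgb_scores, flow_scores, w=0.5):
    """Weighted average of the softmaxed appearance and motion streams."""
    rgb_p = softmax(rgb_scores)
    flow_p = softmax(flow_scores)
    return [w * a + (1 - w) * b for a, b in zip(rgb_p, flow_p)]

# Illustrative scores: the appearance stream favors class 0,
# while the motion stream favors class 1.
fused = fuse_streams([2.0, 1.0, 0.1], [0.5, 2.5, 0.2])
predicted = fused.index(max(fused))  # the motion evidence wins here
```

The weight `w` is an assumed hyperparameter; in practice it is tuned on validation data, and more elaborate fusion (e.g. learned fusion layers) is also common.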
In this paper, we propose to predict the quality score of a saliency map by examining only the saliency map itself. To achieve this goal, we propose a deep saliency quality assessment network (DSQAN) that predicts saliency quality scores directly from saliency maps. First of all, we model the saliency quality assessment task as a regression problem. To train an efficient regression model,...
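As a toy illustration of framing quality assessment as regression, the sketch below hand-crafts two simple features from a saliency map and fits a one-feature least-squares regressor. The feature choices and the linear model are assumptions made purely for illustration; DSQAN itself learns its representation with a deep network rather than using hand-crafted features:

```python
def map_features(saliency_map):
    """Crude hand-crafted features of a 2D saliency map with values in [0, 1]."""
    values = [v for row in saliency_map for v in row]
    mean = sum(values) / len(values)
    # Fraction of pixels that are strongly salient (> 0.5).
    coverage = sum(1 for v in values if v > 0.5) / len(values)
    return mean, coverage

def fit_linear(xs, ys):
    """Ordinary least squares for a single feature: y ~ a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx
```

Given (feature, quality-score) training pairs, `fit_linear` yields a predictor `a * x + b` that can score unseen saliency maps; the deep version replaces both the features and the linear map with learned layers.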
There are two important aspects of human action recognition. The first is how to locate the area that best indicates what the subjects in the videos are doing. The second is how to utilize the appearance and motion information in the video data. In this paper, we propose a gaze-assisted deep neural network, which performs the action recognition task with the help of human visual attention...
This paper proposes a novel salient object detection method that uses mid-level and high-level visual cues. In the mid-level objectness evaluation, we generate three complementary saliency maps from the multi-scale segmentation cue, the background cue, and the spatial color distribution cue. The first cue is used to highlight objects via local region segmentation. The second cue uses the...
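Complementary saliency maps such as these are ultimately combined into a single map. The sketch below assumes a plain averaging-and-rescaling fusion, since the truncated abstract does not state the paper's actual combination rule:

```python
def fuse_maps(maps):
    """Average several same-size 2D saliency maps and rescale to [0, 1]."""
    h, w = len(maps[0]), len(maps[0][0])
    fused = [[sum(m[i][j] for m in maps) / len(maps) for j in range(w)]
             for i in range(h)]
    # Rescale so the strongest response is 1 (guard against an all-zero map).
    peak = max(max(row) for row in fused) or 1.0
    return [[v / peak for v in row] for row in fused]
```

Averaging rewards pixels that several cues agree on; multiplicative fusion is a stricter alternative that suppresses any pixel missed by even one cue.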
In this paper, we propose a framework to discover and segment a favorite object from natural images. The main idea is to first generate a shape-based common template of the favorite object using images collected from the web. Then, the common template is used to extract the favorite object from the original images. In the common template generation, co-segmentation is used to provide the initial...
In this paper, we propose a novel graphical model considering saliency (GMS) to classify and annotate fine-grained bird breeds. The processing can be divided into four steps. First, each image is over-segmented into several regions. Then, we use GMS to perform the classification and annotation based on region-level and patch-level features. To further improve the precision of classification,...
Salient object extraction has recently been a hot research topic, and it is very helpful in many other fields of computer vision, such as image recognition, content-based image retrieval, and image compression. In this paper, we propose a novel method to extract salient objects from natural images. In the proposed algorithm, we partition the image into patches of homogeneous regions, and then...
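The patch-partitioning step can be sketched as follows. The per-patch contrast score used here (distance of the patch mean from the global mean) is an illustrative proxy only, since the abstract truncates before describing the actual per-patch processing:

```python
def partition(img, ph, pw):
    """Split a 2D grayscale image into non-overlapping ph x pw patches."""
    h, w = len(img), len(img[0])
    return [[[row[x:x + pw] for row in img[y:y + ph]]
             for x in range(0, w, pw)]
            for y in range(0, h, ph)]

def patch_contrast(img, ph, pw):
    """Score each patch by |patch mean - global mean| as a crude saliency proxy."""
    flat = [v for row in img for v in row]
    g = sum(flat) / len(flat)
    scores = []
    for prow in partition(img, ph, pw):
        scores.append([])
        for patch in prow:
            vals = [v for r in patch for v in r]
            scores[-1].append(abs(sum(vals) / len(vals) - g))
    return scores
```

Patches whose mean intensity deviates most from the global mean get the highest scores, giving a coarse patch-level saliency map that later stages could refine.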