In the Bag-of-Visual-Words (BoVW) framework, visual words are independent of each other, which not only discards the spatial order between visual words but also loses semantic information. Inspired by word embeddings, this study applies a similar embedding procedure to a large number of visual words. In this way, the corresponding embedding vectors of the visual words can be formulated...
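The idea above can be illustrated with a minimal sketch: treat each image as a "sentence" of visual-word IDs ordered by spatial position, and embed each visual word by its co-occurrence pattern with neighbouring words. This is a simplification of word2vec-style training; the sentence ordering, window size, and the `vw*` identifiers below are all illustrative assumptions, not details from the paper.

```python
from collections import Counter, defaultdict
import math

# Hypothetical visual-word "sentences": each image is a sequence of
# visual-word IDs, here assumed to be ordered by spatial position.
images = [
    ["vw3", "vw7", "vw3", "vw9"],
    ["vw7", "vw3", "vw9", "vw1"],
    ["vw1", "vw9", "vw7", "vw3"],
]

def cooccurrence_vectors(corpus, window=2):
    """Embed each visual word as its co-occurrence counts with nearby words."""
    vecs = defaultdict(Counter)
    for seq in corpus:
        for i, w in enumerate(seq):
            for j in range(max(0, i - window), min(len(seq), i + window + 1)):
                if j != i:
                    vecs[w][seq[j]] += 1
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

vecs = cooccurrence_vectors(images)
print(round(cosine(vecs["vw3"], vecs["vw7"]), 3))
```

Visual words that appear in similar spatial contexts end up with similar vectors, giving BoVW a notion of similarity between words that plain histogram counting lacks.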
Keyword spotting in video document images is challenging due to the low resolution and complex backgrounds of video images. We propose a combination of Texture-Spatial Features (TSF) for spotting keywords in video images without recognizing them. First, a segmentation method extracts words from the text lines in each video
In this paper, we propose a novel multi-label image annotation method for image retrieval based on annotated keywords. For multi-label image annotation, a bi-coded genetic algorithm is employed to select the optimal feature subset and corresponding optimal weights for each one-vs-one SVM classifier. After an unlabelled image
paper, we propose a Bayesian approach to region-based image annotation, which integrates content-based search and context into a unified framework. The content-based search selects representative keywords by matching an unlabeled image with the labeled ones, followed by a weighted keyword ranking; the ranked keywords are in turn used
Automatic image annotation (AIA) plays an important role, and attracts much research attention, in image understanding and retrieval. Annotation can be posed as a classification problem in which each annotation keyword is defined by a group of database images labeled with a semantic word. It is shown that, by establishing
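The formulation above can be sketched in a few lines: each keyword defines a group of labeled database images, and an unlabeled image is assigned the keyword whose group it most resembles. The toy feature vectors, keyword names, and the nearest-centroid rule below are all illustrative assumptions, far simpler than any real AIA classifier.

```python
import math

# Hypothetical 2-D feature vectors for database images, grouped by keyword.
groups = {
    "beach": [[0.9, 0.1], [0.8, 0.2]],
    "forest": [[0.1, 0.9], [0.2, 0.8]],
}

def centroid(vectors):
    """Mean vector of a keyword's image group."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def annotate(features, groups):
    """Assign the keyword whose group centroid is closest to the image."""
    best, best_d = None, float("inf")
    for word, vecs in groups.items():
        d = math.dist(features, centroid(vecs))
        if d < best_d:
            best, best_d = word, d
    return best

print(annotate([0.85, 0.15], groups))  # assigns "beach"
```

In practice each keyword's classifier would be trained on high-dimensional visual features rather than compared by raw centroid distance, but the keyword-as-class framing is the same.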
The design and development of an automatic identity management system for the retrieval and matching of tattoo images is very important for advancing the investigative capabilities of forensic and law enforcement agencies. Conventional tattoo-based retrieval techniques are keyword
that are more similar are considered to be entries of a dictionary associated with the initial keyword used for the query. Moreover, the corresponding regions are parts of the visual lexicon describing the keyword. Also, an already existing lexicon may be iteratively updated by new features that may not match the existing
To perform a semantic search on a large dataset of images, we need to be able to transform the visual content of images (colors, textures, shapes) into semantic information. This transformation, called image annotation, assigns a caption or keywords to the visual content in a digital image. In this paper we try to
Automatic image annotation is a promising key to semantic-based image retrieval by keywords. Most existing automatic image annotation approaches have focused on exploring the relationship between images and annotation words while neglecting the semantic information of the annotated keywords. In this paper we propose a semi
This paper presents a Semantic Attribute assisted video SUMmarization framework (SASUM). Compared with traditional methods, SASUM has several innovative features. Firstly, we use a natural language processing tool to discover a set of keywords from image and text corpora to form the semantic attributes of visual
Characterizing high-definition images from the web is a very difficult task. In this paper we propose a unique web image re-ranking framework that learns, both offline and online, the visual and semantic meanings of images with respect to numerous query keywords. These visual and semantic meanings of images are extended to visual
algorithms, web image information is extracted from textual sources such as image file names, anchor texts, existing keywords and, of course, surrounding text. However, the systems that attempt to mine information for images using surrounding text suffer from several problems, such as the inability to correctly assign all