approach has a limit, as only the annotations of images found during the interaction are updated. In this paper we introduce a novel method of semi-automatic annotation. The method uses visual feature representations of keywords, which are improved during region-based relevance feedback. The experiments show that this
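A minimal sketch of such a feedback-driven update, assuming a keyword's visual representation is a centroid of region feature vectors nudged toward regions the user marks as relevant (the function name, the centroid model, and the learning rate `alpha` are illustrative assumptions, not the paper's actual method):

```python
# Sketch: updating a keyword's visual feature representation from
# region-based relevance feedback. The simple centroid model and all
# names are illustrative assumptions.

def update_keyword_representation(centroid, feedback_regions, alpha=0.3):
    """Move the keyword's feature centroid toward the mean of the
    region feature vectors the user marked as relevant."""
    if not feedback_regions:
        return centroid
    dim = len(centroid)
    mean = [sum(r[i] for r in feedback_regions) / len(feedback_regions)
            for i in range(dim)]
    return [(1 - alpha) * c + alpha * m for c, m in zip(centroid, mean)]

# Example: two relevant regions pull the centroid toward their mean.
centroid = [0.0, 0.0]
regions = [[1.0, 0.0], [1.0, 2.0]]
print(update_keyword_representation(centroid, regions))  # [0.3, 0.3]
```

Iterating this update over several feedback rounds is what lets the keyword's representation improve during the interaction rather than only at indexing time.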
Automatically assigning relevant text keywords to images is an important problem. Many algorithms have been proposed in the past decade and have achieved good performance. Efforts have focused on model representations of keywords, but the properties of features have not been well investigated. In most cases, a group of
Automatic image annotation is a technique by which computer systems automatically assign appropriate keywords to an input digital image. Smart cities are characterized by large volumes of data, and images are one of the prominent data types. In the present research, images of smart cities are collected and then using automatic
Automatic image annotation (AIA) plays an important role and attracts much research attention in image understanding and retrieval. Annotation can be posed as a classification problem in which each annotation keyword is defined by a group of database images labeled with that semantic word. It is shown that, by establishing
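As an illustration of annotation posed as classification, a minimal nearest-centroid sketch, assuming each keyword is modeled by the group of training feature vectors labeled with it (the toy features, names, and distance model are illustrative assumptions, not a specific published system):

```python
# Sketch: each keyword is defined by a group of labeled feature
# vectors; a new image is annotated with the keywords whose group
# centroids lie closest to its feature vector.
from math import dist

def annotate(image_vec, keyword_groups, top_k=2):
    """Rank keywords by the distance from the image's feature vector
    to each keyword group's centroid; return the top_k closest."""
    scores = []
    for word, vectors in keyword_groups.items():
        centroid = [sum(col) / len(vectors) for col in zip(*vectors)]
        scores.append((dist(image_vec, centroid), word))
    return [word for _, word in sorted(scores)[:top_k]]

# Toy 2-D "visual features" for three keyword groups.
groups = {
    "sky":   [[0.1, 0.9], [0.2, 0.8]],
    "grass": [[0.9, 0.1], [0.8, 0.2]],
    "water": [[0.1, 0.5], [0.2, 0.6]],
}
print(annotate([0.15, 0.85], groups))  # ['sky', 'water']
```

Real systems replace the centroid with a trained per-keyword classifier, but the framing is the same: one classification decision per candidate keyword.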
that are more similar are considered to be entries of a dictionary associated with the initial keyword used for the query. Moreover, the corresponding regions are part of the visual lexicon describing the keyword. Also, an already existing lexicon may be iteratively updated with new features that may not match the existing
, the improved model is capable of discovering the correlation between blobs (segmented regions) and textual keywords, so as to automatically generate keywords for un-annotated images according to joint probabilities. Moreover, it has the ability to detect and remove false keywords by considering the co-occurrence of
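A minimal sketch of co-occurrence-based pruning of false keywords, assuming a precomputed pairwise co-occurrence score and a fixed threshold (the scores, names, and threshold are illustrative assumptions, not the model's trained statistics):

```python
# Sketch: a candidate keyword whose average co-occurrence with the
# other candidates falls below a threshold is treated as a false
# keyword and dropped. All values here are illustrative.

def prune_keywords(candidates, cooccur, threshold=0.2):
    """Keep keywords whose mean pairwise co-occurrence score with the
    remaining candidates meets the threshold."""
    kept = []
    for w in candidates:
        others = [v for v in candidates if v != w]
        if not others:
            kept.append(w)
            continue
        score = sum(cooccur.get(frozenset((w, v)), 0.0)
                    for v in others) / len(others)
        if score >= threshold:
            kept.append(w)
    return kept

# Toy pairwise co-occurrence scores (unlisted pairs default to 0.0).
cooccur = {
    frozenset(("sky", "cloud")): 0.8,
    frozenset(("sky", "grass")): 0.4,
    frozenset(("cloud", "grass")): 0.3,
    frozenset(("car", "sky")): 0.1,  # "car" rarely co-occurs here
}
print(prune_keywords(["sky", "cloud", "grass", "car"], cooccur))
# ['sky', 'cloud', 'grass']
```

Here "car" is removed because its average co-occurrence with the outdoor-scene keywords is low, which mirrors the idea of rejecting generated keywords that are inconsistent with the rest of the annotation.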
Automatic image annotation is a promising methodology for image retrieval. However, most current annotation models are not yet sophisticated enough to produce high-quality annotations. Given an image, keywords irrelevant to its contents are often produced, and these are a primary obstacle to getting high-quality image
its relevance. During search, we retrieve similar images containing the correct keywords for a given target image. For example, we prioritize images in which objects of interest extracted from the target image are dominant, as it is more likely that the words associated with such images describe those objects. We tailored our
Automatic image annotation is an important but highly challenging problem in semantic-based image retrieval. In this paper, we formulate image annotation as a supervised image classification problem under a region-based image annotation framework. In region-based image annotation, keywords are usually
In order to enable more effective image retrieval via keywords, automatic image annotation and categorization have become important problems in computer vision and content-based image retrieval. Unfortunately, there exists a semantic gap between low-level feature vectors and high-level semantics or concepts. In
classifiers are discriminatively trained from images with multiple associations, including spatial, syntactic, or semantic relationships, between images and concepts. The proposed approach was evaluated on a Corel dataset with 374 keywords and the TRECVID 2003 dataset with ten selected concepts. When compared with state-of-the