that are more similar are considered to be entries of a dictionary associated with the initial keyword used for the query. Moreover, the corresponding regions are parts of the visual lexicon describing the keyword. Also, an already existing lexicon may be iteratively updated by new features that may not match the existing
integrating both low-level visual features and high-level textual keywords. Unfortunately, manual image annotation is a tedious process and may not be feasible for large image databases. To overcome this limitation, several approaches that can annotate images in a semi-supervised or unsupervised way have emerged. In this paper
To perform a semantic search on a large dataset of images, we need to be able to transform the visual content of images (colors, textures, shapes) into semantic information. This transformation, called image annotation, assigns a caption or keywords to the visual content in a digital image. In this paper we try to
Automatic image annotation is a promising key to semantic-based image retrieval by keywords. Most existing automatic image annotation approaches focus on exploring the relationship between images and annotation words while neglecting the semantic information of the annotated keywords themselves. In this paper we propose a semi
the objects effectively. In addition, how to tag the segmented objects with keywords is also a challenge for researchers. In this study, we propose a color-differentiated fuzzy c-means (CDFCM) framework for effective image segmentation, which yields segmented objects within an image that are useful for
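The fragment above refers to a color-differentiated fuzzy c-means (CDFCM) framework. The paper's specific variant is not shown here, but the standard fuzzy c-means clustering it builds on can be sketched as follows; the function name, parameters, and NumPy implementation are illustrative assumptions, not the authors' code:

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Standard fuzzy c-means on feature vectors X of shape (n, d).

    Returns cluster centers (c, d) and the membership matrix U (n, c),
    where U[i, k] is the degree to which point i belongs to cluster k.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random initial memberships; each row sums to 1.
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        Um = U ** m
        # Cluster centers are membership-weighted means of the data.
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Euclidean distance from every point to every center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)  # avoid division by zero
        # Membership update: inverse-distance ratios raised to 2/(m-1).
        inv = d ** (-2.0 / (m - 1))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.linalg.norm(U_new - U) < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```

For image segmentation, `X` would hold per-pixel color features; taking the argmax of each row of `U` assigns every pixel to a segment.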
system considering artifacts using the self-organizing map with refractoriness exploits this property in order to retrieve multiple similar images. In this image retrieval system, not only color information but also spectral information and keywords are employed as image features. Moreover, the original image is divided into several
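The fragment above describes dividing an image into regions and extracting color-based features from each. As a hedged illustration of that per-region step (not the paper's actual feature set, which also includes spectral information and keywords), a block-wise color histogram can be computed like this; `block_color_histogram` and its block and bin counts are assumptions:

```python
import numpy as np

def block_color_histogram(img, blocks=4, bins=8):
    """Divide an RGB image (H, W, 3, uint8) into blocks x blocks regions
    and concatenate each region's normalized 3-D color histogram into a
    single feature vector of length blocks * blocks * bins**3."""
    h, w, _ = img.shape
    feats = []
    for i in range(blocks):
        for j in range(blocks):
            region = img[i * h // blocks:(i + 1) * h // blocks,
                         j * w // blocks:(j + 1) * w // blocks]
            hist, _ = np.histogramdd(region.reshape(-1, 3).astype(float),
                                     bins=(bins, bins, bins),
                                     range=((0, 256), (0, 256), (0, 256)))
            hist = hist.ravel()
            feats.append(hist / hist.sum())  # per-region normalization
    return np.concatenate(feats)
```

Retrieval can then rank database images by a similarity measure (e.g. cosine similarity) between these feature vectors and the query's.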