This paper presents contextual kernel and spectral methods for learning the semantics of images that allow us to automatically annotate an image with keywords. First, to exploit the context of visual words within images for automatic image annotation, we define a novel spatial string kernel to quantify the similarity
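The snippet names a spatial string kernel over visual words but is cut off before defining it. As a hedged illustration only (not the authors' kernel), a minimal p-spectrum string kernel over sequences of visual-word IDs could look like this; the function name and the choice of k-grams are assumptions for the sketch:

```python
from collections import Counter

def spectrum_kernel(s, t, k=2):
    """p-spectrum string kernel: inner product of k-gram count vectors.

    s, t: sequences of visual-word IDs (e.g. lists of ints).
    Returns the number of shared k-grams, weighted by their counts.
    """
    # Count all contiguous k-grams in each sequence
    cs = Counter(tuple(s[i:i + k]) for i in range(len(s) - k + 1))
    ct = Counter(tuple(t[i:i + k]) for i in range(len(t) - k + 1))
    # Kernel value = dot product of the two count vectors
    return sum(cs[g] * ct[g] for g in cs)
```

A spatial variant, as hinted at in the abstract, would additionally encode where the visual words occur in the image; this sketch only captures their ordering.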
Semantic image retrieval using text such as keywords or captions at different semantic levels has attracted considerable research attention in recent years. Automatic image annotation (AIA) has proved to be an effective and promising solution for automatically deducing high-level semantics from low-level visual
regions and words. The third and fourth approaches are based on segmenting the images into homogeneous regions. Both of these approaches rely on a clustering algorithm to learn the association between visual features and keywords. The clustering task is not trivial as it involves clustering a very high-dimensional and sparse
using feature vectors. We perform static analysis over the computed features to obtain distinguishing feature descriptors. Maximum similarity, i.e. minimum distance, allows us to find the query-relevant combined pictures and their associated relevant words. For the textual part of the query we compute the concepts (keywords as well as synonyms of
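The retrieval step described here, ranking by maximum similarity (minimum distance) between descriptors, can be sketched as a simple nearest-neighbour search. This is an illustrative sketch only; the descriptor dimensionality and the Euclidean metric are assumptions, since the snippet does not specify them:

```python
import numpy as np

def retrieve(query_vec, db_vecs, top_k=3):
    """Rank database images by Euclidean distance to a query descriptor.

    query_vec: 1-D feature descriptor of the query.
    db_vecs:   2-D array, one descriptor per database image.
    Returns the indices of the top_k nearest images and their distances
    (minimum distance = maximum similarity).
    """
    dists = np.linalg.norm(db_vecs - query_vec, axis=1)
    order = np.argsort(dists)[:top_k]
    return order, dists[order]
```

The keywords associated with the top-ranked images would then be propagated to the query, as the abstract outlines.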
With the popularization of the Web and imaging devices, more and more digital images are available on the Internet. How to effectively organize and manage these Web images has become a critical issue. In this paper, a novel approach that employs the relationships between words to achieve Web image annotation is proposed
Automatic image annotation is the process of assigning keywords to digital images based on their content. In one sense, it is a mapping from visual content information to semantic context information. In this study, we propose a novel approach to the automatic image annotation problem, where the
In this paper, we propose a new method to select relevant images to the given keywords from the images gathered from the Web. Our novel method is based on the probabilistic latent semantic analysis (PLSA) model, which is a generative probabilistic topic model. Firstly, we gather images related to the given keywords
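The abstract above builds on probabilistic latent semantic analysis (PLSA). As a hedged sketch only (not the paper's specific model), a minimal PLSA fitted by EM on a document-term (or image-keyword) count matrix could look like this; the parameterization via P(z|d) and P(w|z) and the iteration count are standard but assumed here:

```python
import numpy as np

def plsa(counts, n_topics, n_iter=50, seed=0):
    """Fit a basic PLSA model by EM on a (docs x words) count matrix.

    Returns:
      p_z_d: (docs x topics)  topic mixture P(z|d) per document
      p_w_z: (topics x words) word distribution P(w|z) per topic
    """
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    # Random initialization, normalized to valid distributions
    p_z_d = rng.random((n_docs, n_topics))
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    p_w_z = rng.random((n_topics, n_words))
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d,w), shape (docs, words, topics)
        joint = p_z_d[:, None, :] * p_w_z.T[None, :, :]
        joint /= joint.sum(axis=2, keepdims=True) + 1e-12
        # M-step: re-estimate parameters from expected counts
        weighted = counts[:, :, None] * joint
        p_z_d = weighted.sum(axis=1)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
        p_w_z = weighted.sum(axis=0).T
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
    return p_z_d, p_w_z
```

In the relevance-filtering setting described above, images whose topic mixtures align poorly with the topics dominated by the query keywords would be discarded.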
Automatic image annotation is the process of assigning relevant keywords to images. It is considered a promising research area at present. Annotation of an image can be defined as information that describes the image along three aspects, i.e., when these images were taken, what are the
With the large number of Web sites promoting the use of illicit drugs, it has become important to screen these sites for the protection of children on the Internet. Conventional keyword-based approaches are not sufficient because these Web sites often contain many images and few meaningful words other than prices. We