In this paper, we propose a new 3D object retrieval method based on visual keywords. In our method, the visual keywords are generated from clusters of relative angle context distributions, which provide a statistical shape context that captures local shape characteristics and is rotation- and scale-invariant.
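The clustering step described above can be sketched generically: local descriptors are grouped by k-means, and each cluster center becomes one visual keyword. This is a minimal illustration only; the function name, the farthest-point initialization, and the plain Euclidean features are assumptions standing in for the paper's relative-angle context distributions.

```python
import numpy as np

def build_visual_keywords(descriptors, k=3, iters=20):
    """Cluster local descriptors into k 'visual keywords' via k-means.
    Generic sketch: arbitrary feature vectors stand in for the paper's
    relative-angle context distributions."""
    # farthest-point initialization keeps the sketch deterministic
    centers = [descriptors[0].astype(float)]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(descriptors - c, axis=1) for c in centers], axis=0)
        centers.append(descriptors[d.argmax()].astype(float))
    centers = np.array(centers)
    for _ in range(iters):
        # assign each descriptor to its nearest center
        dist = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = dist.argmin(axis=1)
        # recompute centers; keep the old center if a cluster empties
        for j in range(k):
            if np.any(labels == j):
                centers[j] = descriptors[labels == j].mean(axis=0)
    return centers, labels
```

A new object would then be represented by the histogram of keyword indices its descriptors fall into, which is what makes bag-of-visual-words retrieval possible.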
This paper presents contextual kernel and spectral methods for learning the semantics of images, which allows us to automatically annotate an image with keywords. First, to exploit the context of visual words within images for automatic image annotation, we define a novel spatial string kernel to quantify the similarity
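The excerpt does not give the spatial string kernel's exact form, but the general idea of comparing sequences of visual words can be illustrated with a classic p-spectrum kernel, which scores two strings by their shared length-p substrings. The function below is an illustrative stand-in, not the paper's kernel.

```python
from collections import Counter

def spectrum_kernel(s, t, p=2):
    """p-spectrum kernel: similarity of two visual-word strings as the
    number of shared length-p substrings, counted with multiplicity.
    Illustrative stand-in for an unspecified spatial string kernel."""
    cs = Counter(s[i:i + p] for i in range(len(s) - p + 1))
    ct = Counter(t[i:i + p] for i in range(len(t) - p + 1))
    # inner product in the (implicit) substring-count feature space
    return sum(cs[g] * ct[g] for g in cs)
```

Because the kernel is an inner product in substring-count space, it can be plugged directly into kernel methods such as SVMs or spectral clustering.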
and completeness through sense disambiguation and contextual meta-data preprocessing. Our scheme exploits a linguistic ontology to identify query-relevant homographs, which are used to construct sense-specific keyword sets allowing for enhanced image search and result ranking via the calculation of relatedness between query
paper, we propose a Bayesian approach to region-based image annotation, which integrates content-based search and context into a unified framework. The content-based search selects representative keywords by matching an unlabeled image with labeled ones, followed by a weighted keyword ranking; the ranked keywords are in turn used
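The content-based selection step can be sketched as a similarity-weighted vote: each labeled image contributes its keywords with a weight given by its visual similarity to the query, and keywords are ranked by accumulated weight. The function, the inverse-distance weight, and the data layout below are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def rank_keywords(query, labeled, top=3):
    """Hypothetical sketch of content-based keyword selection.
    `labeled` is a list of (feature_vector, keyword_list) pairs;
    each image votes for its keywords, weighted by similarity
    (here: inverse Euclidean distance) to the query features."""
    scores = {}
    for feat, kws in labeled:
        w = 1.0 / (1.0 + np.linalg.norm(np.asarray(query) - np.asarray(feat)))
        for kw in kws:
            scores[kw] = scores.get(kw, 0.0) + w
    # highest accumulated weight first
    return sorted(scores, key=scores.get, reverse=True)[:top]
```

The ranked list is the kind of output that a subsequent context model could re-weight, keeping visually grounded keywords but promoting combinations that co-occur plausibly.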
Automatic image annotation is the process of assigning keywords to digital images based on their content. In one sense, it is a mapping from visual content information to semantic context information. In this study, we propose a novel approach to the automatic image annotation problem, where the
MRF-related AIA approach; we explore optimal parameter estimation and model inference systematically to leverage the learning power of the traditional generative model. Specifically, we propose a new potential function for site modeling based on a generative model, and build local graphs for each annotation keyword. The
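The excerpt does not define the proposed potential function, but the general shape of an MRF labeling energy, unary potentials per site plus pairwise potentials over neighboring sites, can be sketched as below. The potential tables and names are placeholders; in the paper's setting the unary term would come from the generative model's score for a keyword at a site.

```python
def annotation_energy(unary, pairwise, labels, edges):
    """Generic MRF energy for keyword labeling (illustrative only).
    unary[i][l]    : cost of assigning label l to site i
    pairwise[a][b] : cost of labels a, b on neighboring sites
    labels         : chosen label per site
    edges          : neighbor pairs (i, j) of the site graph"""
    # unary term: per-site cost of the chosen label
    energy = sum(unary[i][labels[i]] for i in range(len(labels)))
    # pairwise term: interaction cost along each graph edge
    energy += sum(pairwise[labels[i]][labels[j]] for i, j in edges)
    return energy
```

Inference then amounts to searching for the labeling that minimizes this energy, typically with graph cuts or loopy belief propagation on the local graphs.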
Managing photos by using visual features (e.g., color and texture) is known to be a powerful, yet imprecise, retrieval paradigm because of the semantic gap problem. The same is true if search relies only on keywords (or tags), derived from either the image context or user-provided annotations. In this paper, we present