Does there exist a compact set of visual topics, in the form of keyword clusters, capable of representing the visual content of all images within an acceptable error? In this paper, we answer this question by analyzing distribution laws for keywords from image descriptions and comparing them with traditional techniques in NLP, thereby
In classical image classification approaches, low-level features have been used, but the high dimensionality of feature spaces poses a challenge for feature selection and distance measurement during the clustering process. In this paper, we propose an approach to generate visual keywords and combine both visual
In this paper, we propose a technique to retrieve images using the 'search by similarity' method with the help of multimodal keywords. Multimodal keywords consist of low-level MPEG-7 color descriptors and textual keywords. The visual keywords and textual keywords are combined, and the image collection is
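The snippet above describes fusing visual and textual keywords into one similarity score. A minimal sketch of one common fusion strategy, linear weighting of a visual similarity with a textual (Jaccard) similarity; the weighting scheme and function names are illustrative assumptions, not the authors' actual method:

```python
def text_similarity(query_terms, doc_terms):
    """Jaccard similarity between two keyword sets."""
    q, d = set(query_terms), set(doc_terms)
    union = q | d
    return len(q & d) / len(union) if union else 0.0

def multimodal_score(visual_sim, text_sim, alpha=0.5):
    """Linear fusion of a visual similarity and a textual similarity.
    alpha controls the relative weight of the visual modality."""
    return alpha * visual_sim + (1 - alpha) * text_sim
```

For example, a document sharing one of two query keywords scores 1/3 on the textual side; with `alpha=0.5` that is averaged with whatever the visual descriptor comparison yields.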
Recently, the development of 3D model database systems and retrieval components has become increasingly important due to the rapidly growing number of available 3D models. This has made retrieval of specific 3D models a vital issue. Unfortunately, traditional keyword searching techniques are not always
This work aims to build a system that suggests tourist destinations based on visual matching and minimal user input. A user can provide either a photo of the desired scenery or a keyword describing the place of interest, and the system will search its database for places that share those visual characteristics. To that
We study the problem of learning to rank images for image retrieval. For a noisy set of images indexed or tagged with the same keyword, we learn a ranking model from training examples and then use the learned model to rank new images. Unlike previous work on image retrieval, which usually coarsely divides the images
We propose an unsupervised approach to segment color images and annotate their regions. The annotation process uses a multi-modal thesaurus built from a large collection of training images by learning associations between low-level visual features and keywords. Association rules are learned through fuzzy
Content-based image retrieval systems can automatically extract the visual content of images, allowing users to query images by their low-level features (such as color and texture). However, users usually prefer to query images based on high-level concepts such as keywords. Classifying images into a number of categories
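A minimal sketch of the low-level querying described above: ranking a collection by color-histogram similarity to a query image. Images are assumed to be `uint8` RGB arrays; the histogram binning and histogram-intersection measure are illustrative choices, not tied to any particular system in this listing.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Quantize each RGB channel into `bins` levels and build a
    normalized joint color histogram (a simple low-level feature)."""
    quantized = (image.astype(np.uint32) * bins) // 256   # per-channel bin index
    codes = (quantized[..., 0] * bins + quantized[..., 1]) * bins + quantized[..., 2]
    hist = np.bincount(codes.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def query_by_similarity(query, collection, bins=8):
    """Rank collection images by histogram-intersection similarity
    to the query; returns indices, most similar first."""
    q = color_histogram(query, bins)
    sims = [np.minimum(q, color_histogram(img, bins)).sum() for img in collection]
    return np.argsort(sims)[::-1]
```

Histogram intersection is bounded in [0, 1] for normalized histograms, which makes the per-image scores directly comparable across the collection.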
collections, and its interface for controlling the level of detail (LOD). As a preprocessing step, this new system applies tree-structured clustering to images based on their keywords and pixel values, and selects representative images for each cluster. When a user specifies one or more keywords, CAT extracts a branch of the
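Selecting a representative image per cluster, as the snippet above mentions, is often done by picking the member closest to the cluster's mean feature vector. A minimal sketch under that assumption, taking precomputed feature vectors and cluster labels as input (this is a generic medoid rule, not necessarily the CAT system's own criterion):

```python
import numpy as np

def cluster_representatives(features, labels):
    """For each cluster label, return the index of the member whose
    feature vector is closest to the cluster centroid."""
    reps = {}
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        centroid = features[idx].mean(axis=0)
        dists = np.linalg.norm(features[idx] - centroid, axis=1)
        reps[int(c)] = int(idx[np.argmin(dists)])
    return reps
```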
semantic concepts associated with ambiguous keywords by exploiting the link structure of articles in Wikipedia. In the second part, we explore an image representation in terms of keywords that reflect the semantic content of an image. Our approach is inspired by the desire to augment low-level image representation with massive
The task of ad hoc photographic image retrieval in the ImageCLEF 2007 international benchmark is to retrieve images from the database that are relevant to a user query formulated as keywords and image examples. This paper presents the rich representation and indexing technologies used in our system, which participated in ImageCLEF