Classical image classification approaches rely on low-level features, but the high dimensionality of the feature space poses a challenge for feature selection and distance measurement during the clustering process. In this paper, we propose an approach to generate visual keywords and combine both visual
associated with an image. In our approach, we divide images into small tiles and create visual keywords using a high-dimensional clustering algorithm. These visual keywords play the same role as text keywords. One of the challenges of this approach is identifying an appropriate size for the visual keywords. In this paper, we report our
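The tiling-and-clustering step described in this snippet can be sketched as follows. This is a minimal illustration under assumed parameters (8x8 grayscale tiles, plain k-means with Euclidean distance); the abstract does not specify the tile size or the clustering algorithm actually used.

```python
import numpy as np

def extract_tiles(image, tile_size):
    """Split a grayscale image into non-overlapping square tiles,
    each flattened into a feature vector."""
    h, w = image.shape
    tiles = []
    for y in range(0, h - tile_size + 1, tile_size):
        for x in range(0, w - tile_size + 1, tile_size):
            tiles.append(image[y:y + tile_size, x:x + tile_size].ravel())
    return np.array(tiles)

def kmeans_visual_keywords(tiles, k, iters=20, seed=0):
    """Cluster tile vectors; each centroid then acts as one 'visual keyword',
    and every tile is labeled with the keyword nearest to it."""
    rng = np.random.default_rng(seed)
    centroids = tiles[rng.choice(len(tiles), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each tile to its nearest centroid, then recompute centroids
        d = np.linalg.norm(tiles[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = tiles[labels == j].mean(axis=0)
    return centroids, labels

# toy example: a 64x64 synthetic image, 8x8 tiles, 4 visual keywords
rng = np.random.default_rng(1)
image = rng.random((64, 64))
tiles = extract_tiles(image, 8)            # 64 tiles, each 64-dimensional
keywords, labels = kmeans_visual_keywords(tiles, k=4)
```

Each image can then be represented as a histogram over the k keyword labels, which is what lets visual keywords be indexed and matched like text keywords.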
Inspired by the keyword-based text filter, this paper proposes an image filter that detects spam images by matching them against user-specified image content. In this way, detecting image spam e-mail is converted into an image-matching process. Stable local feature detection and representation is a fundamental component of
Automatic image annotation is the process of assigning keywords to digital images based on their content. In one sense, it is a mapping from visual content information to semantic context information. In this study, we propose a novel approach to the automatic image annotation problem, where the
We propose an unsupervised approach to segment color images and annotate their regions. The annotation process uses a multi-modal thesaurus that is built from a large collection of training images by learning associations between low-level visual features and keywords. Association rules are learned through fuzzy
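The association-learning step can be illustrated with a crisp (non-fuzzy) simplification: estimate the confidence of rules of the form "visual cluster c => keyword k" from co-occurrence counts over annotated training regions. The toy data and the confidence-only scoring are assumptions for illustration; the paper's actual method uses fuzzy association rules.

```python
import numpy as np

def rule_confidence(region_clusters, region_keywords, n_clusters, vocab):
    """Confidence of rules 'visual cluster c => keyword k', estimated as
    the fraction of regions in cluster c whose annotation contains k."""
    conf = {}
    for c in range(n_clusters):
        idx = np.nonzero(region_clusters == c)[0]
        if len(idx) == 0:
            continue  # no training region fell into this cluster
        for k in vocab:
            hits = sum(1 for i in idx if k in region_keywords[i])
            conf[(c, k)] = hits / len(idx)
    return conf

# toy data: 6 segmented regions, 2 visual clusters, keyword sets per region
clusters = np.array([0, 0, 0, 1, 1, 1])
annotations = [{"sky"}, {"sky"}, {"sky", "cloud"},
               {"grass"}, {"grass"}, {"sky"}]
conf = rule_confidence(clusters, annotations, 2, ["sky", "grass", "cloud"])
```

At annotation time, a new region would be assigned the keywords whose rule confidence for its cluster exceeds some threshold.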
Content-based image retrieval systems can automatically extract the visual content of images, allowing users to query images by their low-level features (such as color and texture). However, users usually prefer querying images based on high-level concepts such as keywords. Classifying images into a number of categories
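A minimal sketch of the low-level querying this snippet describes: a per-channel color histogram as the feature, and histogram intersection as the similarity for query-by-example ranking. The bin count and the intersection measure are illustrative assumptions, not details from the abstract.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Normalized per-channel color histogram — a simple low-level feature."""
    feats = []
    for c in range(image.shape[2]):
        h, _ = np.histogram(image[:, :, c], bins=bins, range=(0, 256))
        feats.append(h)
    f = np.concatenate(feats).astype(float)
    return f / f.sum()

def query_by_example(query, database):
    """Rank database images by histogram-intersection similarity to the query
    (higher is more similar; identical histograms score 1.0)."""
    qf = color_histogram(query)
    sims = [np.minimum(qf, color_histogram(img)).sum() for img in database]
    return np.argsort(sims)[::-1]

# toy database of 5 random RGB images; query with one of them
rng = np.random.default_rng(0)
db = [rng.integers(0, 256, size=(32, 32, 3)) for _ in range(5)]
ranking = query_by_example(db[2], db)
```

Since the query is itself in the database, it ranks first; the gap between such low-level rankings and keyword-level intent is exactly the semantic gap the snippet raises.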
vocabulary. A group-LASSO regularizer is used to drive as many feature weights to zero as possible. We evaluate the quality of the pruned vocabulary by clustering the data using the resulting feature subset. Experiments on the PASCAL VOC 2007 dataset using 5000 visual keywords resulted in around an 80% reduction in the number of
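The group-LASSO pruning effect can be illustrated with the penalty's proximal operator, block soft-thresholding: each group of weights (here, one row per visual keyword) is shrunk toward zero, and groups whose norm falls below the threshold are zeroed out entirely, removing that keyword from the vocabulary. The group layout and the threshold value are assumptions for illustration; the paper's full optimization is not reproduced here.

```python
import numpy as np

def group_soft_threshold(W, lam):
    """Proximal operator of the group-LASSO penalty lam * sum_g ||w_g||_2:
    scale each row by max(0, 1 - lam / ||row||), so weak rows become
    exactly zero — the mechanism that prunes whole keywords at once."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return W * scale

# toy vocabulary: 10 keywords, each with a 3-dimensional weight vector
rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(10, 3))
pruned = group_soft_threshold(W, lam=0.8)
kept = int(np.count_nonzero(np.linalg.norm(pruned, axis=1)))
```

Rows surviving the threshold keep their direction but shrink in norm; iterating this operator inside a gradient scheme (proximal gradient descent) is one standard way to fit group-LASSO models.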
done on a set of data is chosen to form the basis, as is done with keywords. If the base data is chosen arbitrarily, the method is automatic; if some 'knowledge' or 'background' informs the choice, it is adaptive. Statistical features are extracted from the pixel map of the image. The extracted features are
The task of ad hoc photographic image retrieval in the ImageCLEF 2007 international benchmark is to retrieve images in the database relevant to a user query formulated as keywords and image examples. This paper presents the rich representation and indexing technologies exploited in our system that participated in ImageCLEF