retrieval scheme based on annotation keywords and visual content, which can benefit from the strengths of both text- and content-based retrieval. The system starts a query triggered by keywords, then further refines the retrieval results using blob and region information. The first step is to complete semantic filtering with
In this paper, we propose a novel multi-label image annotation method for image retrieval based on annotated keywords. For multi-label image annotation, a bi-coded genetic algorithm is employed to select optimal feature subsets and corresponding optimal weights for each one-vs-one SVM classifier. After an unlabelled image
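The one-vs-one SVM step described above can be sketched as follows. This is a minimal illustration assuming scikit-learn; the feature mask and per-feature weights stand in for those a bi-coded genetic algorithm would evolve, and are fixed by hand here.

```python
# Sketch: one-vs-one SVM classification over a weighted feature subset.
# The mask/weights below are illustrative placeholders for the outputs
# of the genetic algorithm, not values from the paper.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))        # 60 images, 8 low-level visual features
y = rng.integers(0, 3, size=60)     # 3 keyword labels

mask = np.array([1, 1, 0, 1, 0, 1, 1, 0], dtype=bool)  # selected feature subset
weights = np.array([0.9, 0.5, 1.2, 0.7, 1.0])          # weights for kept features

clf = SVC(decision_function_shape="ovo")  # one SVM per pair of labels
clf.fit(X[:, mask] * weights, y)
pred = clf.predict(X[:, mask] * weights)  # one keyword label per image
```

In a full system the genetic algorithm would score each candidate (mask, weights) pair by cross-validated annotation accuracy and keep the fittest.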
In this paper, we propose a novel strategy at an abstract level by combining textual and visual clustering results to retrieve images using semantic keywords and auto-annotate images based on similarity with existing keywords. Our main hypothesis is that images that fall into the same text cluster can be described
In this paper, we describe the use of a Boosting algorithm, Real AdaBoost, for content-based image retrieval (CBIR) on a large number (190) of keyword categories. Previous work with Boosting for image orientation detection has involved only a few categories, such as a simple outdoor vs. indoor scene dichotomy. Other
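The boosting approach above can be sketched as one binary classifier per keyword category. This is a hedged illustration using scikit-learn's `AdaBoostClassifier` (whose default weak learner is a decision stump) as a stand-in for the Real AdaBoost variant the paper names; the data and category count are placeholders.

```python
# Sketch: one boosted binary classifier per keyword category
# (one-vs-rest), mirroring CBIR over many keyword categories.
# Features and labels are synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 6))               # 80 images, 6 visual features
labels = rng.integers(0, 2, size=(80, 3))  # 3 keyword categories, binary

classifiers = []
for k in range(labels.shape[1]):
    clf = AdaBoostClassifier(n_estimators=25)  # 25 boosting rounds
    clf.fit(X, labels[:, k])
    classifiers.append(clf)

# Each image gets a 0/1 decision per keyword category.
scores = np.column_stack([c.predict(X) for c in classifiers])
```

Scaling to 190 categories, as in the paper, is just a longer loop over the same per-category training step.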
integrating both low-level visual features and high-level textual keywords. Unfortunately, manual image annotation is a tedious process and may not be feasible for large image databases. To overcome this limitation, several approaches that can annotate images in a semi-supervised or unsupervised way have emerged. In this paper
We propose an unsupervised approach to segment color images and annotate their regions. The annotation process uses a multi-modal thesaurus that is built from a large collection of training images by learning associations between low-level visual features and keywords. Association rules are learned through fuzzy
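The association-learning idea above can be sketched with fuzzy memberships: each training region has a degree of membership in a visual concept, and the strength of a concept-to-keyword rule is computed from joint memberships over the training set. The concept name, membership values, and confidence measure below are illustrative assumptions, not the paper's method.

```python
# Sketch: fuzzy confidence of an association rule "bluish -> sky".
# Each tuple is (membership of a region in the visual concept "bluish",
# whether the region carries the keyword "sky"). Values are made up.
memberships = [
    (0.9, True), (0.8, True), (0.7, False), (0.2, False), (0.1, True),
]

support = sum(m for m, has_kw in memberships if has_kw)  # fuzzy support of rule
total = sum(m for m, _ in memberships)                   # fuzzy support of concept
confidence = support / total  # strength of "bluish -> sky"
```

Rules whose confidence exceeds a threshold would then be kept in the multi-modal thesaurus and used to propose keywords for new regions.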
This paper presents a new method of automatic image annotation based on visual cognitive theory that improves the accuracy of image recognition by considering two semantic levels of keywords that give feedback to each other. Our system first segments an image and recognizes objects in the K-Nearest
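A K-Nearest Neighbour recognition step like the one named above can be sketched as: describe each segmented region by a feature vector, then label it with the majority keyword among its k nearest training regions. The feature values and keywords below are illustrative, not from the paper.

```python
# Sketch: KNN labelling of a segmented region by majority vote
# over the k nearest training regions (Euclidean distance).
import numpy as np
from collections import Counter

train_feats = np.array([[0.10, 0.20], [0.15, 0.25], [0.90, 0.80], [0.85, 0.90]])
train_keys = ["sky", "sky", "grass", "grass"]  # keywords of training regions

def knn_label(region_feat, k=3):
    dists = np.linalg.norm(train_feats - region_feat, axis=1)  # distance to each
    nearest = np.argsort(dists)[:k]                            # indices of k nearest
    votes = Counter(train_keys[i] for i in nearest)
    return votes.most_common(1)[0][0]                          # majority keyword

print(knn_label(np.array([0.12, 0.22])))  # -> sky
```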
of correlation. Most existing image retrieval systems are based on text search over manually annotated keywords, a process that involves human intellectual and emotional judgment. In our proposed system, this process is largely automated.