The keyword-based Google Images search engine is now very popular for online image search. Unfortunately, only the text terms that are explicitly or implicitly linked with the images are used for image indexing, but those text terms may not correspond exactly to the underlying image semantics
associated with an image. In our approach, we divide images into small tiles and create visual keywords using a high-dimensional clustering algorithm. These visual keywords act the same as text keywords. One of the challenges of this approach is to identify an appropriate size for visual keywords. In this paper, we report our
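The tile-and-cluster idea in the snippet above (the classic bag-of-visual-words approach) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 8x8 tile size, the k-means vocabulary size, and the random toy image are all assumptions made for the example.

```python
import numpy as np

def image_tiles(image, tile):
    """Split a grayscale image (H, W) into non-overlapping tile x tile
    patches, each flattened into a feature vector."""
    h, w = image.shape
    tiles = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            tiles.append(image[y:y + tile, x:x + tile].ravel())
    return np.array(tiles, dtype=float)

def visual_keywords(features, k, iters=20, seed=0):
    """Cluster tile features with plain k-means; each centroid then acts
    as one 'visual keyword' of the vocabulary."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # assign each tile to its nearest centroid
        d = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = features[labels == j].mean(0)
    return centers, labels

# Toy example: a random 64x64 "image" quantized into 4 visual keywords.
rng = np.random.default_rng(1)
img = rng.random((64, 64))
feats = image_tiles(img, 8)              # 64 tiles, each 64-dimensional
words, labels = visual_keywords(feats, k=4)
hist = np.bincount(labels, minlength=4)  # bag-of-visual-words histogram
```

An image is then indexed by its histogram over the vocabulary, so visual keywords can be matched the same way text keywords are. Choosing the tile size is exactly the open question the snippet raises.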
appearance characteristics, so-called visual features. This paper proposes a method to cluster scientific documents based on visual features, the so-called VF-Clustering algorithm. Five kinds of visual features of documents are defined: body, abstract, subtitle, keyword and title. The thought of crossover and
that are more similar are considered to be entries of a dictionary associated with the initial keyword used for the query. Moreover, the corresponding regions are parts of the visual lexicon describing the keyword. Also, an already existing lexicon may be iteratively updated by new features that may not match the existing
event can be effortlessly found using keyword matching, but there are numerous tweets that are likely to contain semantically identical information. Moreover, many systems exist for summarizing tweets related to a particular event, but they have numerous limitations and are unable to provide accurate
This work aims to build a system that suggests tourist destinations based on visual matching and minimal user input. A user can provide either a photo of the desired scenery or a keyword describing the place of interest, and the system will search its database for places that share those visual characteristics. To that
integrating both low-level visual features and high-level textual keywords. Unfortunately, manual image annotation is a tedious process and may not be possible for large image databases. To overcome this limitation, several approaches that can annotate images in a semi-supervised or unsupervised way have emerged. In this paper
Automatic image annotation is a promising key to semantic-based image retrieval by keywords. Most existing automatic image annotation approaches have focused on exploring the relationship between images and annotation words while neglecting the semantic information of the annotated keywords. In this paper we propose a semi
vocabulary. A group-LASSO regularizer is used to drive as many feature weights to zero as possible. We evaluate the quality of the pruned vocabulary by clustering the data using the resulting feature subset. Experiments on the PASCAL VOC 2007 dataset with 5000 visual keywords resulted in around an 80% reduction in the number of
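The mechanism behind the snippet above is that the group-LASSO penalty zeroes out entire groups of weights at once, so whole visual keywords (rather than individual feature dimensions) get pruned. A minimal sketch of its core operation, the group soft-thresholding (proximal) step; the weights, groups, and threshold below are illustrative assumptions, not data from the paper:

```python
import numpy as np

def group_soft_threshold(w, groups, lam):
    """Proximal operator of the group-LASSO penalty lam * sum_g ||w_g||_2:
    shrinks each group's weight sub-vector toward zero, and zeroes it out
    entirely when its L2 norm does not exceed lam -- this is what removes
    whole visual keywords from the vocabulary."""
    out = np.zeros_like(w, dtype=float)
    for g in groups:
        norm = np.linalg.norm(w[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * w[g]
    return out

# Toy weights for three "visual keyword" groups of two dimensions each.
w = np.array([3.0, 4.0,   0.1, 0.1,   0.6, 0.8])
groups = [[0, 1], [2, 3], [4, 5]]
pruned = group_soft_threshold(w, groups, lam=1.0)
# group 0 (norm 5.0) is shrunk; groups 1 and 2 (norms 0.14 and 1.0) are zeroed
```

Iterating this step inside a gradient method is one standard way to fit a group-LASSO objective; with weak groups eliminated, only a small subset of the 5000 keywords survives.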
Traditional methods for image retrieval used metadata associated with images, commonly known as keywords. These methods empowered many World Wide Web (WWW) search engines and achieved a reasonable level of accuracy. A shape-, color-, and texture-based content-based image retrieval (CBIR) and classification algorithm
on pre-defined analysis operators which exploit keywords available in the entity view together with similarity information to produce summary information about the view contents from both a thematic and analytics perspective. In particular, smart entity views can be analyzed according to the following exploratory
image has become a hot research topic. Traditional image annotation methods represent images with only a few keywords, which cannot completely describe or rationally organize the high-level semantics of images, so a great deal of semantic information is lost. Based on the different levels and different aspects of
Unlike traditional multimedia content, content generated on social media platforms such as YouTube and Flickr is usually annotated with a rich set of social tags such as keywords, textual descriptions, category information, and the author's profile. In this paper we investigate the use of such social tag information for
Finding information based on an object's profile is very useful when exact keywords for the object are unknown. Current image retrieval systems all ignore color information: for example, we may want to find a superstar in a red petticoat, or a red flower on a white background. They all cannot
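The color-aware matching the snippet above says existing systems lack can be sketched with a standard color-histogram signature compared by histogram intersection. This is a hypothetical illustration of the general technique, not the paper's method; the 4-bins-per-channel quantization and the synthetic pixel arrays are assumptions for the example:

```python
import numpy as np

def color_histogram(pixels, bins=4):
    """Normalized 3-D RGB histogram (bins**3 buckets) of an (N, 3) array
    of 0-255 pixel values, used as a compact color signature."""
    idx = (pixels // (256 // bins)).clip(0, bins - 1)
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    h = np.bincount(flat, minlength=bins ** 3).astype(float)
    return h / h.sum()

def similarity(h1, h2):
    """Histogram intersection: 1.0 for identical color distributions."""
    return np.minimum(h1, h2).sum()

# Toy query: a mostly-red image should match the red image, not the blue one.
red   = np.tile([[200, 10, 10]], (100, 1))
blue  = np.tile([[10, 10, 200]], (100, 1))
query = np.tile([[210, 20, 20]], (100, 1))
q = color_histogram(query)
assert similarity(q, color_histogram(red)) > similarity(q, color_histogram(blue))
```

A "red petticoat" query would thus rank red-dominated images first, which keyword-only indexing cannot do.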