To bridge the semantic gap between low-level visual features and high-level semantic concepts, this paper puts forward a novel feedback mechanism based on both instance and keyword features. In the offline part, a keyword space model is first constructed and updated using manifold-ranking annotation; in the online
Image annotation becomes increasingly important as the Web continues to grow. We propose a novel approach to enhancing keyword-based Web-image annotation in folksonomy, in which a volunteer user is notified what kind(s) of keywords are necessary and which keywords have already been sufficiently provided by other volunteer
retrieval scheme based on annotation keywords and visual content, which can benefit from the strengths of both text- and content-based retrieval. The system starts with a query triggered by some keywords and further refines the retrieval result based on blob and region information. The first step is to complete semantic filtering with
keywords from the Web pages. The system first identifies the section of the Web page that contains the multimedia file to be extracted and then extracts it by using clustering techniques and other tools of statistical origin. Experimental results on real-world image sharing Web sites are presented and discussed in this paper
To alleviate the known semantic gap, it is necessary to integrate the two-modal parts of Web images, i.e. the low-level visual features and high-level semantic concepts (which are usually represented by keywords), for Web image retrieval. In this paper, we associate the keyword and visual features of Web images from a
vector of the wood image. This keyblock-distribution-based wood image retrieval algorithm is analogous to keyword-based text retrieval: in text retrieval, a keyword can be used to represent the content of a text; similarly, a keyblock in our proposed algorithm can be used to represent the content
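The keyword/keyblock analogy this abstract draws can be sketched as term-frequency vectors compared by cosine similarity. This is a minimal illustration, not the paper's actual algorithm; the 4-entry codebook and the block encodings below are invented for the example:

```python
from collections import Counter
from math import sqrt

def keyblock_histogram(block_indices, codebook_size):
    # Term-frequency-style vector: the fraction of image blocks assigned
    # to each codebook entry (keyblock), analogous to word counts in text.
    counts = Counter(block_indices)
    total = len(block_indices) or 1
    return [counts.get(i, 0) / total for i in range(codebook_size)]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical images encoded as sequences of keyblock indices.
query = keyblock_histogram([0, 1, 1, 2], 4)
db_img = keyblock_histogram([1, 1, 2, 2], 4)
score = cosine_similarity(query, db_img)
```

Ranking database images by `score` against the query vector then mirrors a standard vector-space text retrieval pipeline.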
and completeness through sense disambiguation and contextual metadata preprocessing. Our scheme exploits a linguistic ontology that identifies query-relevant homographs, which are used to construct sense-specific keyword sets allowing for enhanced image search and result ranking via the calculation of relatedness between query
This paper proposes a new re-ranking scheme and presents experimental performance results for Web image retrieval with integrated queries. In our previous work, a cross-modal association rule was designed for associating one keyword with several visual feature clusters in Web image retrieval. Based on the cross-modal
First, the related textual information associated with Web images is identified as candidate annotations for the images. Second, word co-occurrence is utilized to eliminate irrelevant keywords, improving annotation accuracy. Then, keyword-based association analysis is exploited to further discover
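The co-occurrence filtering step described in this snippet can be sketched as keeping only candidate keywords that co-occur strongly, on average, with the other candidates. The co-occurrence counts, keywords, and threshold below are all hypothetical, not from the paper:

```python
# Hypothetical corpus-level co-occurrence counts between keyword pairs.
cooccur = {
    frozenset(("beach", "sea")): 120,
    frozenset(("beach", "sand")): 95,
    frozenset(("sea", "sand")): 80,
    frozenset(("beach", "menu")): 2,
    frozenset(("sea", "menu")): 1,
    frozenset(("sand", "menu")): 1,
}

def filter_candidates(candidates, min_avg=10.0):
    # Keep keywords whose average co-occurrence with the other candidates
    # clears a threshold; weakly connected words are treated as noise.
    kept = []
    for w in candidates:
        others = [c for c in candidates if c != w]
        avg = sum(cooccur.get(frozenset((w, o)), 0) for o in others) / len(others)
        if avg >= min_avg:
            kept.append(w)
    return kept

annotations = filter_candidates(["beach", "sea", "sand", "menu"])
```

Here "menu" (extracted from surrounding page text, say) is dropped because it rarely co-occurs with the coherent cluster of beach-related terms.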
As search technology has rapidly developed, mainstream search engines are now able to meet users' basic search needs. However, current search algorithms and methodologies mostly depend on keyword matching, which can be effective for text search but is not efficient for keyword-lacking or non-text
We can now obtain enough acceptable images of a target object just by submitting its object name to a conventional keyword-based Web image search engine. However, because the search results rarely include its uncommon images, we often obtain only its common images and cannot easily gain exhaustive knowledge
where our approach is tested on images retrieved from the Google keyword-based image search engine. The results show that a combination of our approach as a local image descriptor with another global descriptor outperforms other approaches.
designed and implemented to resolve the problem of cross-language queries and image retrieval. It can greatly reduce the time and effort required for the search. Experiments on diverse queries on Yahoo image search have shown that the proposed scheme can improve image results for non-English keyword
comparison features in real time. In addition, the img(Rummager) application can execute a hybrid search of images from the application server, combining keyword information and visual similarity. img(Rummager) also supports easy retrieval evaluation based on the normalized modified retrieval rank (NMRR) and average precision
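The NMRR measure mentioned here comes from the MPEG-7 evaluation framework. A sketch of one common statement of it, assuming ground-truth ranks are known and K is the per-query penalty cutoff (the example values of NG and K below are invented):

```python
def nmrr(ranks, ng, k):
    # ranks: 1-based rank of each ground-truth item in the result list,
    # or None if the item was not retrieved within the top K.
    # Missed items are penalized at 1.25 * K, per the MPEG-7 convention.
    penalized = [r if (r is not None and r <= k) else 1.25 * k for r in ranks]
    avr = sum(penalized) / ng                  # average rank
    mrr = avr - 0.5 - ng / 2                   # modified retrieval rank
    return mrr / (1.25 * k - 0.5 * (1 + ng))   # normalize to [0, 1]

perfect = nmrr([1, 2, 3, 4], ng=4, k=8)   # all ground truth on top -> 0.0
worst = nmrr([None] * 4, ng=4, k=8)       # nothing retrieved -> 1.0
```

Lower NMRR is better: 0 means every relevant image is ranked at the very top, 1 means none was found within the cutoff.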
application value in various fields. This paper studies and discusses image-media semantic description and automatic semantic annotation. By extracting SIFT visual features, we describe the image semantics, then establish the association between local image visual features and semantic keywords, and
neighbor search of videos from the Internet. The fundamental problem lies in the scalability of a search technique in the face of the intractable volume of videos that keeps rolling onto the Web. In this paper, we investigate the scalability of several well-known features, including color signatures and visual keywords, for Web-based
Automatic image annotation is the process of assigning keywords to digital images based on their content. In one sense, it is a mapping from visual content information to semantic context information. In this study, we propose a novel approach to the automatic image annotation problem, where the
The Hausdorff distance (HD) and its modifications provide one of the best approaches for matching binary images. This paper proposes a formalism generalizing almost all of these HD-based methods. Numerical experiments on searching for words in binary text images are carried out on old Bulgarian typewritten text, a printed Bulgarian Chrestomathy from 1884, and a Slavonic manuscript from 1574.
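For reference, the classical Hausdorff distance this abstract generalizes can be sketched over foreground-pixel coordinate sets; the paper's own formalism covers many variants (e.g. replacing the outer max with a mean, as in the modified HD), and the tiny point sets below are invented:

```python
from math import hypot

def directed_hd(a_pts, b_pts, reduce=max):
    # Directed Hausdorff distance between two point sets (foreground
    # pixels of binary images): for each point of A, the distance to its
    # nearest point in B, combined by `reduce` (max = classical variant;
    # a mean reduction gives the "modified" HD).
    return reduce(min(hypot(ax - bx, ay - by) for bx, by in b_pts)
                  for ax, ay in a_pts)

def hausdorff(a_pts, b_pts, reduce=max):
    # Symmetric HD: worst of the two directed distances.
    return max(directed_hd(a_pts, b_pts, reduce),
               directed_hd(b_pts, a_pts, reduce))

# Tiny hypothetical "images": sets of foreground pixel coordinates.
a = [(0, 0), (1, 0)]
b = [(0, 0), (3, 0)]
d = hausdorff(a, b)  # → 2.0
```

The brute-force nearest-point search here is O(|A|·|B|); practical word-spotting systems typically use distance transforms or spatial indexing instead.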
The associations between different modalities of Web images could be very useful for Web image retrieval. In this paper, we investigate the multi-modal associations between two basic modalities of Web images, i.e. keyword and visual feature clusters, by data mining technique. The association rule crosses two
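Mining a cross-modal association rule of the form keyword → visual-feature cluster can be sketched with the usual support/confidence counts; the image "transactions", keywords, and cluster labels below are invented for illustration, not taken from the paper:

```python
# Hypothetical transactions: each Web image contributes the keywords from
# its surrounding text plus the visual-feature cluster its content falls into.
images = [
    {"kw": {"tiger"}, "cluster": "c1"},
    {"kw": {"tiger"}, "cluster": "c1"},
    {"kw": {"tiger"}, "cluster": "c2"},
    {"kw": {"car"},   "cluster": "c3"},
]

def rule_stats(keyword, cluster):
    # Support and confidence of the cross-modal rule: keyword -> cluster.
    n = len(images)
    both = sum(1 for im in images
               if keyword in im["kw"] and im["cluster"] == cluster)
    kw_total = sum(1 for im in images if keyword in im["kw"])
    support = both / n
    confidence = both / kw_total if kw_total else 0.0
    return support, confidence

s, c = rule_stats("tiger", "c1")  # support 2/4, confidence 2/3
```

Rules clearing minimum support and confidence thresholds then link a textual keyword to the visual clusters it tends to co-occur with.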
MRF-related AIA approach; we explore optimal parameter estimation and model inference systematically to leverage the learning power of the traditional generative model. Specifically, we propose a new potential function for site modeling based on the generative model and build local graphs for each annotation keyword. The