Deep learning has had a significant impact on diverse pattern recognition tasks in recent years. In this paper, we investigate its potential for keyword spotting in handwritten documents by designing a novel feature extraction system based on Convolutional Deep Belief Networks. Sliding window features are learned from
effective in terms of better precision. The proposed method makes use of keyword clusters for query expansion, and visual features are used to detect duplicate images. Removing duplicates leads to a further improvement in precision and recall in the retrieval results
Spoken keyword recognition has been under the spotlight for the past several decades, but has gained significant attention in recent years due to the rapid increase in front-end technology applications for mobile and wearable computing. This work presents the trade-off in performance between Artificial Neural Networks
Keyword extraction aims to find representative phrases for a document. Graph-based keyword extraction represents the input document as a graph and ranks its nodes according to their scores using a graph-based ranking method. In this paper, we propose a method to compute the importance of co-occurring words in a document and
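The graph-based ranking described in this abstract can be sketched roughly as follows, assuming a simple co-occurrence window and a PageRank-style iteration; function names and parameters here are illustrative, not the paper's actual method.

```python
from collections import defaultdict

def rank_keywords(tokens, window=2, damping=0.85, iters=50):
    # Build an undirected co-occurrence graph over the token sequence.
    neighbors = defaultdict(set)
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            if tokens[j] != w:
                neighbors[w].add(tokens[j])
                neighbors[tokens[j]].add(w)
    # PageRank-style iteration: each node shares its score among neighbors.
    score = {w: 1.0 for w in neighbors}
    for _ in range(iters):
        new = {}
        for w in neighbors:
            incoming = sum(score[u] / len(neighbors[u]) for u in neighbors[w])
            new[w] = (1 - damping) + damping * incoming
        score = new
    # Return words ranked by final score, highest first.
    return sorted(score, key=score.get, reverse=True)

tokens = "graph based keyword extraction ranks graph nodes".split()
print(rank_keywords(tokens)[:3])
```

Words that co-occur with many distinct neighbours accumulate higher scores, which is the intuition behind using graph centrality as keyword importance.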
We present a novel approach to query-by-example keyword spotting (KWS) using a long short-term memory (LSTM) recurrent neural network-based feature extractor. In our approach, we represent each keyword using a fixed-length feature vector obtained by running the keyword audio through a word-based LSTM acoustic model
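Once each keyword is represented by a fixed-length vector, query-by-example spotting reduces to nearest-neighbour matching between the query template and embeddings of audio windows. The sketch below shows only that matching step with plain cosine similarity; the LSTM feature extractor itself is not reproduced, and the vectors and threshold are illustrative.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def spot_keyword(query_vec, window_vecs, threshold=0.8):
    # Flag every sliding-window embedding close enough to the query template.
    return [i for i, w in enumerate(window_vecs)
            if cosine_similarity(query_vec, w) >= threshold]

query = [0.9, 0.1, 0.0]
windows = [[0.8, 0.2, 0.1], [0.0, 1.0, 0.0], [1.0, 0.0, 0.1]]
print(spot_keyword(query, windows))  # windows 0 and 2 match the template
```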
Keyword spotting in speech is a very well-researched problem, but there are almost no approaches for singing. Most speech-based approaches cannot be applied easily to singing because the phoneme durations in singing vary a lot more than in speech, especially the vowel durations. To represent expected phoneme durations
In this paper, a method of automatic Chinese keyword extraction based on KNN is proposed. Firstly, it preprocesses the document using a vector space model. Secondly, it constructs a set of candidate keywords based on the KNN method and a labelled dataset. Finally, it post-processes the candidate keywords based on the character of
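The candidate-construction step described above can be sketched as a KNN lookup: represent the input document in a vector space, find its nearest labelled documents, and pool their keywords as candidates. This is a minimal illustration under those assumptions; the helper names and toy data are hypothetical, not from the paper.

```python
import math
from collections import Counter

def cosine(a, b):
    # Cosine similarity between two term-frequency Counters.
    num = sum(a[t] * b.get(t, 0) for t in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def knn_candidates(doc_tokens, labelled, k=2):
    # Vector-space model: represent the document by term frequencies.
    vec = Counter(doc_tokens)
    # Rank labelled documents by similarity and keep the k nearest.
    nearest = sorted(labelled, key=lambda d: cosine(vec, d["vec"]),
                     reverse=True)[:k]
    # Pool the neighbours' keywords as the candidate set.
    candidates = Counter()
    for d in nearest:
        candidates.update(d["keywords"])
    return [w for w, _ in candidates.most_common()]

labelled = [
    {"vec": Counter("search engine index query".split()),
     "keywords": ["search", "index"]},
    {"vec": Counter("neural network training loss".split()),
     "keywords": ["neural network"]},
]
print(knn_candidates("query search ranking".split(), labelled, k=1))
```

A real system would follow this with the post-processing stage the abstract mentions, filtering candidates by language-specific criteria.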
The problem of automatically extracting the most interesting and relevant keyword phrases in a document has been studied extensively as it is crucial for a number of applications. These applications include contextual advertising, automatic text summarization, and user-centric entity detection systems. All these
approach has a limit, as only the annotations of images found during the interaction are updated. In this paper we introduce a novel method of semi-automatic annotation. The method uses visual feature representations of keywords, which are improved during region-based relevance feedback. The experiments show that this
a keyword extraction algorithm for Chinese documents based on TEXT-NET is proposed. Using the semantic similarity computation of HowNet, a text is mapped to a TEXT-NET, which is then combined with complex network theory and statistical methods to extract keywords. Experimental results show that the recall and precision
the user's acoustic signal from a singing voice and retrieves the music information using both lyrics and melody information. The lyrics recognition module uses a keyword spotting system based on the text content of the lyrics with an HMM comparison engine. The melody recognition module extracts pitch and MFCC features
Automatically assigning relevant text keywords to images is an important problem. Many algorithms have been proposed in the past decade and achieved good performance. Efforts have focused upon model representations of keywords, but properties of features have not been well investigated. In most cases, a group of
The Fisher kernel is a generic framework which combines the benefits of generative and discriminative approaches to pattern classification. In this contribution, we propose to apply this framework to handwritten word-spotting. Given a word image and a keyword generative model, the idea is to generate a vector which
Database (HMDB), a collection of realistic video clips. The detection and localization paradigm we introduce uses a keyword model for detecting key activities or gestures in a video sequence. This process is analogous to the use of keyword or key-phrase detection in speech processing. The method learns models for the
, the improved model is capable of discovering the correlation between blobs (segmented regions) and textual keywords so as to automatically generate keywords for un-annotated images according to joint probabilities. Moreover, it has the ability to detect and remove false keyword(s) by considering the co-occurrence of
Image search re-ranking, as an effective tool to improve the text-based image search result, has been adopted by many commercial search engines nowadays. Given a query keyword, images are first retrieved based on the textual information. Then visual features are extracted from images to reorder them by mining their
. In particular, as part of the interest router table, each K-bucket stores information about a certain number of peers that have high interest similarity. A query can be executed in the appropriate K-bucket by calculating interest similarity and interest keywords. Through mining latent interest, we found that two peers having
Abstract: By analyzing the classification process and the MapReduce computing paradigm, it is found that MapReduce's parallel and distributed computing model is appropriate for constructing classifier models. This paper presents a MapReduce algorithm for parallel and distributed classification, aiming to reduce the computational time of the training process on large-scale document collections. Our experiment...