Markov Model/Artificial Neural Network (HMM/ANN) keyword spotting framework. Mel-Frequency Cepstral Coefficients (MFCC) were used for feature extraction. The ANN is a three-layer feedforward Multi-Layer Perceptron (MLP). To recognize the words, an HMM decoder was used that implemented the Viterbi
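The decoding step this snippet refers to can be illustrated with a generic Viterbi implementation. This is a minimal sketch: the state set, transition, and emission log-probabilities are hypothetical placeholders, not the paper's actual keyword models.

```python
import math

def viterbi(observations, states, log_start, log_trans, log_emit):
    """Find the most likely state sequence for an observation sequence.

    log_start[s]     : log P(state s at t = 0)
    log_trans[s][s2] : log P(s -> s2)
    log_emit[s][o]   : log P(observation o | state s)
    """
    # Log-probability of the best path ending in each state at t = 0.
    best = {s: log_start[s] + log_emit[s][observations[0]] for s in states}
    back = []
    for obs in observations[1:]:
        prev, best, ptr = best, {}, {}
        for s in states:
            # Best predecessor state for s at this time step.
            p = max(states, key=lambda s2: prev[s2] + log_trans[s2][s])
            best[s] = prev[p] + log_trans[p][s] + log_emit[s][obs]
            ptr[s] = p
        back.append(ptr)
    # Backtrack from the best final state.
    last = max(states, key=lambda s: best[s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

In a keyword spotter, the states would be HMM phone states of the keyword and filler models, and the emissions would come from the MLP's posterior outputs over MFCC frames.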
In this paper, we present an acoustic keyword spotter that operates in two stages: detection and verification. In the detection stage, keywords are detected in the utterances; in the verification stage, confidence measures are used to verify the detected keywords and reject false alarms. A new confidence measure
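One common family of confidence measures (not necessarily the new one this paper proposes) is a duration-normalized log-likelihood ratio between the keyword model and a background (filler) model, thresholded to reject false alarms. A minimal sketch, with hypothetical field names:

```python
def confidence(keyword_loglik, filler_loglik, n_frames):
    """Duration-normalized log-likelihood ratio: how much better the
    keyword model explains the segment than a background model does."""
    return (keyword_loglik - filler_loglik) / n_frames

def verify(detections, threshold=0.5):
    """Keep only detections whose confidence exceeds the threshold;
    the rest are rejected as false alarms."""
    return [d for d in detections
            if confidence(d["kw_ll"], d["bg_ll"], d["frames"]) >= threshold]
```

The threshold trades off misses against false alarms and would normally be tuned on a development set.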
Being able to search for words or phrases in historic handwritten documents is of paramount importance when preserving cultural heritage. Storing scanned pages of written text can save the information from degradation, but it does not make the textual information readily available. Automatic keyword spotting systems
Web-based mapping applications such as Google Maps or Virtual Earth have become increasingly popular. However, current map search is still keyword-based and supports a limited number of spatial predicates. In this paper, we build towards a natural language query interface to spatial databases to answer crime-related
This paper surveys intelligent systems (IS) applications through a literature review and classification of articles from 1956 to 2009, using keyword indexes and article abstracts, in order to explore how IS applications in the field of fraud detection and prevention have developed during this period. Based on the scope of
Sanitization: the process of disguising sensitive information by overwriting it with realistic-looking but false data of a similar type. The system uses the method of gibberish word substitution to sanitize the keywords. Since nouns and verbs provide the most information in a sentence, they are treated as keywords and the
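Gibberish word substitution can be sketched as follows. This is a toy version: the hand-supplied keyword set stands in for the noun/verb tagging the system would actually perform, and each keyword is overwritten with a random same-length string so the text's shape is preserved.

```python
import random
import string

def gibberish(word, rng):
    """Random lowercase string of the same length as the original word."""
    return "".join(rng.choice(string.ascii_lowercase) for _ in range(len(word)))

def sanitize(text, keywords, seed=0):
    """Replace each keyword token (a hand-supplied noun/verb set here)
    with same-length gibberish, keeping surrounding punctuation."""
    rng = random.Random(seed)
    out = []
    for tok in text.split():
        core = tok.strip(".,;:!?")
        if core.lower() in keywords:
            tok = tok.replace(core, gibberish(core, rng))
        out.append(tok)
    return " ".join(out)
```

Because replacements are length-preserving, the sanitized text keeps the original layout, which helps it look realistic.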
over the past 20 years, and the annual paper production in 2007 was about three times that in 1999. The analytical results lead to several key findings. Several author keywords became the focus in the last few years and might indicate a new research direction in the future. There are clear distinctions among author
Based on the human cognitive process of text understanding, a text cognitive function is proposed to describe the capability of text understanding; it is expressed as a sum over the extracted keywords and the relations among them. Text energy and text information are defined as two characteristics in
extraction methods; multiclass support vector machine, multilayer neural network, and nearest-neighbour classifiers are combined in order to classify the content and find the appropriate keywords for it. Color histograms and moments are used in this paper as features to represent image content. We support our case
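The feature extraction and classifier combination mentioned here can be illustrated with a toy sketch. The coarse RGB quantization and the plain majority vote below are assumptions for illustration, not the paper's exact scheme.

```python
from collections import Counter

def color_histogram(pixels, bins=4):
    """Coarse normalized RGB histogram: each channel is quantized
    into `bins` levels, giving a bins**3-dimensional feature vector."""
    hist = [0] * (bins ** 3)
    for r, g, b in pixels:
        idx = ((r * bins // 256) * bins * bins
               + (g * bins // 256) * bins
               + (b * bins // 256))
        hist[idx] += 1
    total = len(pixels)
    return [h / total for h in hist]

def majority_vote(predictions):
    """Combine the labels predicted by several classifiers (e.g. SVM,
    neural network, nearest neighbour) by taking the most common one."""
    return Counter(predictions).most_common(1)[0][0]
```

Each classifier would be trained on the same histogram/moment features; the vote then maps the combined decision to a keyword for the image.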
filtering recommendation is implemented using intelligent agents. The agents work together to recommend meaningful training courses and update the course information. The system uses a user's profile and keywords from courses to rank courses. A course-ranking accuracy of 90% is achieved, while flexibility is achieved
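Ranking courses by matching course keywords against a user profile might look like the sketch below. The keyword-set representation and the overlap score are hypothetical simplifications of whatever the agents actually compute.

```python
def rank_courses(profile_keywords, courses):
    """Score each course by the fraction of its keywords that appear
    in the user's profile, then return courses sorted best-first."""
    def score(course):
        kws = course["keywords"]
        if not kws:
            return 0.0
        return len(kws & profile_keywords) / len(kws)
    return sorted(courses, key=score, reverse=True)
```

In the multi-agent setting, one agent could maintain the profile while another refreshes course keywords, with this ranking applied to the merged data.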