This paper presents a novel keyword-selection-based spoken document indexing framework that selects the best-matching keyword from query candidates using spoken term detection (STD) for spoken document retrieval. Our method comprises creating a keyword set including keywords that are likely to occur in a spoken document
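The snippet does not spell out the selection step, but a minimal sketch of keyword selection driven by STD confidence scores might look as follows; the candidate list, the scoring callable, and the threshold are hypothetical stand-ins, not the paper's actual components.

```python
def select_keywords(candidates, std_score, threshold=0.5):
    """Keep query candidates whose spoken term detection (STD) score
    in the document clears a threshold, ranked best match first.

    candidates : candidate keyword strings
    std_score  : callable(keyword) -> detection confidence in [0, 1]
                 (hypothetical stand-in for a real STD system)
    """
    kept = [(std_score(kw), kw) for kw in candidates]
    kept = [(s, kw) for s, kw in kept if s >= threshold]
    return [kw for _, kw in sorted(kept, reverse=True)]

# Toy scorer for demonstration only; a real STD system would search
# the audio for each candidate term.
toy_score = lambda kw: min(1.0, len(kw) / 10)
print(select_keywords(["lattice", "syllable", "std"], toy_score))
# ['syllable', 'lattice']
```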
This paper describes experiments on audio clip comparison based on spoken content. The spoken content is obtained using automatic speech recognition. The social tags that are available for most of the audio clips are used as keywords. These keywords are mapped to the spoken transcription representing the audio clips
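As a rough illustration of the tag-to-transcript mapping the snippet describes, the sketch below keeps only the social tags whose tokens can all be found in a clip's ASR transcript; the tokenizer and the containment test are assumed simplifications, since the snippet does not specify the mapping.

```python
def map_tags_to_transcript(tags, transcript):
    """Return the social tags that are grounded in the ASR transcript,
    i.e. those whose tokens all occur in it (assumed matching rule)."""
    words = set(transcript.lower().split())
    return [t for t in tags if all(tok in words for tok in t.lower().split())]

clip_tags = ["jazz", "live concert", "interview"]
asr_text = "welcome to the live jazz concert recorded at the club"
print(map_tags_to_transcript(clip_tags, asr_text))  # ['jazz', 'live concert']
```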
The so-called filler or garbage Hidden Markov Models (HMMs) are among the most widely used models for lexicon-free, query-by-string keyword spotting in the fields of speech recognition and, lately, handwritten text recognition. An important drawback of this approach is the large computational cost of the keyword
Semantic image retrieval using text such as keywords or captions at different semantic levels has attracted considerable research attention in recent years. Automatic image annotation (AIA) has proven to be an effective and promising solution for automatically deducing high-level semantics from low-level visual
its relevance. During search, we retrieve similar images containing the correct keywords for a given target image. For example, we prioritize images in which the extracted objects of interest from the target image are dominant, as it is more likely that the words associated with such images describe the objects. We tailored our
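The prioritization mentioned in the snippet can be pictured as ranking candidates by how much of each image the extracted objects of interest cover. The bounding-box representation and the area-fraction score below are assumptions for illustration, not the paper's actual dominance measure.

```python
def dominance(boxes, width, height):
    """Fraction of the image covered by the objects of interest;
    boxes are (x, y, w, h) and overlaps are ignored for brevity."""
    return sum(w * h for _, _, w, h in boxes) / (width * height)

def rank_by_dominance(candidates):
    """Sort candidate images so those whose objects of interest are
    most dominant come first; candidates are (image_id, boxes, w, h)."""
    return sorted(candidates, key=lambda c: dominance(c[1], c[2], c[3]),
                  reverse=True)

imgs = [("a.jpg", [(0, 0, 50, 50)], 100, 100),    # objects cover 25%
        ("b.jpg", [(10, 10, 90, 90)], 100, 100)]  # objects cover 81%
print([img_id for img_id, *_ in rank_by_dominance(imgs)])  # ['b.jpg', 'a.jpg']
```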
mis-recognition of sub-units. To solve the problems of OOV keywords and mis-recognized words, we used individual syllables as the sub-word unit in continuous speech recognition and an n-gram sequence of syllables as the retrieval unit. We propose an n-gram indexing/retrieval method with distance in a syllable lattice for
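Although the snippet breaks off, the indexing idea it outlines (syllable n-grams as retrieval units, matched with some tolerance for mis-recognized syllables) can be sketched as below. The toy syllable strings and the substitution-count proxy for the distance are assumptions; the paper's actual distance is defined over a syllable lattice.

```python
from collections import defaultdict

def syllable_ngrams(syls, n=3):
    """Overlapping syllable n-grams, used as the retrieval unit."""
    return [tuple(syls[i:i + n]) for i in range(len(syls) - n + 1)]

def build_index(docs, n=3):
    """Inverted index mapping each syllable n-gram to the documents containing it."""
    index = defaultdict(set)
    for doc_id, syls in docs.items():
        for g in syllable_ngrams(syls, n):
            index[g].add(doc_id)
    return index

def near(g1, g2, max_subs=1):
    """Crude distance: allow up to max_subs substituted syllables,
    standing in for the paper's distance over a syllable lattice."""
    return sum(a != b for a, b in zip(g1, g2)) <= max_subs

def retrieve(query_syls, docs, n=3, max_subs=1):
    """Score documents by how many query n-grams they (approximately) contain."""
    index = build_index(docs, n)
    scores = defaultdict(int)
    for q in syllable_ngrams(query_syls, n):
        for g, ids in index.items():
            if near(q, g, max_subs):
                for d in ids:
                    scores[d] += 1
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical syllable transcriptions; the query still hits d1 even if
# one syllable of a matching n-gram were substituted by the recognizer.
docs = {"d1": ["to", "o", "kyo", "e", "ki"],
        "d2": ["kyo", "o", "to", "e", "ki"]}
print(retrieve(["to", "o", "kyo"], docs))  # [('d1', 1)]
```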