Text analysis of a web page is more difficult than analysis of the text of a normal document due to the presence of additional information, such as HTML structure, styling codes, irrelevant text, and hyperlinks. In this paper, we propose an unsupervised method to extract keywords from a web page. The
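The snippet above does not show the paper's actual algorithm, but the general pipeline it describes — stripping HTML structure before scoring candidate keywords — can be sketched roughly as follows. The tag-skipping parser, the stopword list, and the plain frequency ranking are illustrative assumptions, not the authors' method.

```python
import re
from collections import Counter
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text, skipping <script> and <style> blocks."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth:
            self.parts.append(data)

def extract_keywords(html, k=5,
                     stopwords=frozenset({"the", "a", "of", "and", "to", "in", "from"})):
    # Strip markup, lowercase, drop stopwords, rank remaining terms by frequency.
    parser = TextExtractor()
    parser.feed(html)
    words = re.findall(r"[a-z]+", " ".join(parser.parts).lower())
    counts = Counter(w for w in words if w not in stopwords)
    return [w for w, _ in counts.most_common(k)]

page = ("<html><body><h1>Keyword extraction</h1>"
        "<script>var x = 1;</script>"
        "<p>Extraction of keywords from web pages.</p></body></html>")
print(extract_keywords(page, k=3))
```

Note how the script content (`var x = 1;`) never reaches the frequency counter — exactly the kind of irrelevant text the snippet says makes web pages harder to analyze than plain documents.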
This paper proposes a new methodology that automatically generates English mnemonic keywords to support the learning of basic Japanese vocabulary. A new phonetic algorithm, called JemSoundex, is also introduced for transliterating Japanese and English for phonetic matching. The effective
This paper proposes a new keyword extraction method that uses a bag-of-concepts approach to extract keywords from Arabic text. The proposed algorithm utilizes a semantic vector space model instead of the traditional vector space model to group words into classes. The new method builds a word-context matrix where synonym words will be
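The word-context matrix mentioned above can be sketched as a simple symmetric co-occurrence matrix: words that appear in similar contexts get similar rows, which is what lets a semantic vector space group synonyms together. The windowed counting and cosine comparison below are a minimal illustration (using English tokens for readability, where the paper targets Arabic), not the paper's actual construction.

```python
import numpy as np

def word_context_matrix(tokens, window=2):
    """Count co-occurrences within a +/-window-word context for each token."""
    vocab = sorted(set(tokens))
    index = {w: i for i, w in enumerate(vocab)}
    M = np.zeros((len(vocab), len(vocab)))
    for i, w in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                M[index[w], index[tokens[j]]] += 1
    return M, index

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# "car" and "automobile" occur in near-identical contexts, so their
# context rows are close and the cosine similarity approaches 1.
tokens = "the car drove fast the automobile drove fast".split()
M, idx = word_context_matrix(tokens)
print(cosine(M[idx["car"]], M[idx["automobile"]]))
```

Grouping words whose rows exceed a similarity threshold into one class is one plausible way to realize the "synonym words" grouping the snippet describes.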
Most approaches towards automatic evaluation of free-text answers are keyword-centric. Though keywords essentially reflect and represent the primary concept coverage of an answer, they are incomplete without the associated text. The words occurring before and after the keywords bring out the true meaning. The work
This article presents a method for automatic tagging of YouTube videos. The proposed method combines an automatic speech recognition (ASR) system that extracts the spoken contents, and a keyword extraction component that aims at finding a small set of tags representing a video. In order to improve the robustness of
This paper proposes an emotion classification method for spoken utterances using a spoken-term detection (STD) method, which serves as a keyword extraction method for spoken utterances. The extracted keywords are used to decide on the emotion category of an utterance. Most keywords extracted by the STD system are redundant
keywords and searching templates, the word segmentation algorithm based on the keyword dictionary, the storage of searching templates, and the algorithm of template matching. On this foundation, we implement a QA system for the railway domain; the experimental results show that a QA system based on the techniques we employed
) specific to emotions and story genres and (iv) synthesis of story speech using mark-up language and prosody modification factors. Keyword and part-of-speech (POS) features are used for story-genre classification and emotion prediction. The prosody modification factors are derived carefully by analyzing the perceptual
Basque required the use of morphemes and other sub-word units. Additionally, some keyword spotting and semantic methods have also been applied in the system in order to retrieve information properly. In most of the cases, the methods employed during this project could suit the requirements of many under-resourced languages
verification in the software development process. We developed a dictionary tool to support the translation from natural language to formal language. The tool provides functionalities such as easy registration of keywords in the dictionary and exhaustive marking of the keywords. The dictionary represents a map between equivalent
understanding of the domain in which the semantics of data is machine-understandable. Second, we build on a Raspberry Pi an interface that has the capability to recognize speech queries and give an oral response. Our interface analyzes each speech query, converts speech to text, and extracts keywords from the text. Later, these keywords are
(ANEW), which are also rated in A-V dimensions, as keywords, and apply latent semantic analysis (LSA) on those words and the words in the clips to estimate A-V values in the clips. Finally, the results of the acoustic and semantic parts are combined. We show that combining semantic and acoustic information for dimensional speech
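The LSA step described above can be sketched in miniature: build a term-document matrix, take a truncated SVD to get low-rank word vectors, then estimate a clip word's rating as a similarity-weighted average over ANEW-style anchor words. The tiny corpus, the rank-2 truncation, and the made-up valence ratings below are illustrative assumptions only, not the paper's data or exact procedure.

```python
import numpy as np

# Toy term-document matrix over a tiny corpus.
docs = [
    "happy joy smile laugh",
    "sad cry tears grief",
    "joy laugh happy fun",
]
vocab = sorted({w for d in docs for w in d.split()})
idx = {w: i for i, w in enumerate(vocab)}
A = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        A[idx[w], j] += 1

# LSA: rank-k truncated SVD gives dense word representations.
U, sv, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
word_vecs = U[:, :k] * sv[:k]

def sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical ANEW-style valence ratings (1 = negative, 9 = positive).
anew = {"happy": 8.5, "sad": 2.0}

def estimate_valence(word):
    # Similarity-weighted average over anchor words; negative similarities
    # are clamped to zero so dissimilar anchors do not flip the estimate.
    weights = {a: max(sim(word_vecs[idx[word]], word_vecs[idx[a]]), 0.0)
               for a in anew}
    total = sum(weights.values())
    return sum(weights[a] * anew[a] for a in anew) / total

print(round(estimate_valence("laugh"), 2))
```

Because "laugh" co-occurs with "happy" and never with "sad", its LSA vector sits close to the positive anchor and the estimated valence lands near 8.5 — the same intuition, at toy scale, as propagating A-V ratings from ANEW keywords to clip words.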