In this paper we propose an automated method for generating domain-specific stop words to improve classification of natural-language content. We also implemented a Bayesian natural-language classifier for web pages, based on maximum a posteriori estimation of keyword distributions using bag
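The abstract above combines two ideas: deriving domain-specific stop words automatically, and a Bayesian bag-of-words classifier with MAP estimation. A minimal sketch of both, under assumptions of my own (a document-frequency threshold as the stop-word heuristic, and add-one smoothing as the MAP prior; the paper's actual method may differ):

```python
from collections import Counter
import math

def domain_stop_words(docs_by_class, threshold=0.9):
    # Hypothetical heuristic: words occurring in at least `threshold`
    # of all documents carry little class signal and are treated as
    # domain-specific stop words.
    all_docs = [d for docs in docs_by_class.values() for d in docs]
    df = Counter()
    for doc in all_docs:
        df.update(set(doc))
    n = len(all_docs)
    return {w for w, c in df.items() if c / n >= threshold}

class NaiveBayes:
    """Multinomial naive Bayes over bag-of-words token lists."""

    def fit(self, docs_by_class, stop_words=frozenset()):
        self.priors, self.cond, self.vocab = {}, {}, set()
        total_docs = sum(len(d) for d in docs_by_class.values())
        for cls, docs in docs_by_class.items():
            counts = Counter()
            for doc in docs:
                counts.update(w for w in doc if w not in stop_words)
            self.priors[cls] = math.log(len(docs) / total_docs)
            self.cond[cls] = counts
            self.vocab |= counts.keys()
        return self

    def predict(self, doc):
        best, best_lp = None, -math.inf
        v = len(self.vocab)
        for cls in self.priors:
            total = sum(self.cond[cls].values())
            lp = self.priors[cls]
            for w in doc:
                if w in self.vocab:
                    # Add-one (Laplace) smoothing, i.e. the MAP estimate
                    # under a uniform Dirichlet prior on word probabilities.
                    lp += math.log((self.cond[cls][w] + 1) / (total + v))
            if lp > best_lp:
                best, best_lp = cls, lp
        return best
```

Filtering the generated stop words before counting keeps high-frequency, low-information words from diluting the per-class keyword distributions.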
document. We believe that our graph captures many properties of text documents and can be used for various applications in text mining and NLP, such as keyword extraction and determining the nature of a document. Our approach to constructing a semantic graph is language-independent. We performed an
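The snippet does not specify how the semantic graph is built, but a common language-independent construction is a word co-occurrence graph, with weighted degree as a crude keyword score. A sketch under that assumption (the function names and the sliding-window parameter are mine, not the paper's):

```python
from collections import defaultdict

def build_cooccurrence_graph(sentences, window=2):
    # Nodes are words; edge weights count how often two words co-occur
    # within `window` positions of each other in a sentence.
    graph = defaultdict(lambda: defaultdict(int))
    for sent in sentences:
        for i, w in enumerate(sent):
            for j in range(i + 1, min(i + window + 1, len(sent))):
                if w != sent[j]:
                    graph[w][sent[j]] += 1
                    graph[sent[j]][w] += 1
    return graph

def top_keywords(graph, k=3):
    # Rank words by weighted degree; more elaborate schemes
    # (e.g. PageRank-style scoring) build on the same graph.
    score = {w: sum(nbrs.values()) for w, nbrs in graph.items()}
    return sorted(score, key=score.get, reverse=True)[:k]
```

Because the construction only needs tokenized sentences, it works for any language with a tokenizer, which matches the language-independence claim above.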
livelihoods, how to deal with its negative impacts, and which mitigation or adaptation policies to support. A related line of work has used bag-of-words and word-level features to detect frames in text automatically. Such works face limitations, since standard keyword-based features may not generalize well to accommodate surface
This paper proposes an SAO (subject-action-object) network for identifying technological opportunities by reusing the inventive knowledge in patents. Despite its ease of use and simplicity, the keyword-based approach is not sufficient for reusing technological knowledge because it cannot represent how technological
In this paper, we designed a knowledge-supporting software system in which sentences and keywords are extracted from a large-scale document database. The system includes a semantic representation scheme for natural-language processing of the document database. Documents, originally in PDF form, are broken into
difficulty due to the large size of the word list in a thesaurus. In this paper, we present a new method for text categorization over a corpus of newspaper articles, where each annotation must be composed of thesaurus elements. The method applies lemmatization and obtains keywords and named
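The pipeline sketched in that abstract (lemmatize, then keep only terms that are controlled thesaurus descriptors) can be illustrated minimally. The lookup-table lemmatizer below is an assumption for the sake of a self-contained example; a real system would use a morphological analyzer:

```python
def lemmatize(token, lemma_table):
    # Toy lookup-based lemmatizer (assumption); maps inflected forms
    # to lemmas and falls back to the lowercased token itself.
    return lemma_table.get(token.lower(), token.lower())

def annotate(article_tokens, thesaurus, lemma_table):
    # Keep only lemmas that are valid thesaurus descriptors, so the
    # resulting annotation is composed entirely of thesaurus elements.
    lemmas = [lemmatize(t, lemma_table) for t in article_tokens]
    return sorted({lem for lem in lemmas if lem in thesaurus})
```

Restricting annotations to the thesaurus vocabulary is what makes the categories consistent across articles, at the cost of the large word lists the snippet mentions.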
The growing popularity of the World Wide Web promotes e-learning via the web. During e-learning, users can easily share, reuse, and organize knowledge. Using a search engine, e-learners retrieve web pages with a set of keywords, but pages unrelated to the given tags frequently appear