The predictive precision of mathematical models and their interpretability usually work against each other: improving one degrades the other. In this article we introduce a new mathematical model based on Perception-based Logical Deduction (see [18], [19]), an implicative fuzzy inference mechanism grounded in linguistic semantics, which enables the users...
In the proposed approach, an attempt was made to disambiguate ambiguous Bengali words using the Naïve Bayes classification algorithm. The task was divided into two modules, each executing a specific subtask. In the first module, the algorithm was applied to regular text collected from the Bengali text corpus developed in the TDIL project of the Govt. of India, and the accuracy of disambiguation...
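The abstract above does not include the classifier's details, so as a rough illustration of how Naïve Bayes word-sense disambiguation works in general, here is a minimal sketch on toy English data with hypothetical senses of "bank" (not the paper's Bengali corpus or its actual features):

```python
import math
from collections import defaultdict

# Toy training data: (context words, sense) pairs for the ambiguous
# word "bank". Purely illustrative -- not the paper's Bengali corpus.
training = [
    (["river", "water", "fish"], "shore"),
    (["money", "loan", "deposit"], "finance"),
    (["deposit", "account", "cash"], "finance"),
    (["water", "boat", "shore"], "shore"),
]

def train_nb(examples):
    """Estimate P(sense) and P(word | sense) counts for add-one smoothing."""
    sense_counts = defaultdict(int)
    word_counts = defaultdict(lambda: defaultdict(int))
    vocab = set()
    for words, sense in examples:
        sense_counts[sense] += 1
        for w in words:
            word_counts[sense][w] += 1
            vocab.add(w)
    return sense_counts, word_counts, vocab

def disambiguate(context, sense_counts, word_counts, vocab):
    """Pick the sense maximizing log P(sense) + sum log P(w | sense)."""
    total = sum(sense_counts.values())
    best, best_score = None, float("-inf")
    for sense, count in sense_counts.items():
        score = math.log(count / total)
        denom = sum(word_counts[sense].values()) + len(vocab)
        for w in context:
            score += math.log((word_counts[sense][w] + 1) / denom)
        if score > best_score:
            best, best_score = sense, score
    return best

sense_counts, word_counts, vocab = train_nb(training)
print(disambiguate(["money", "account"], sense_counts, word_counts, vocab))
# -> finance
```

Add-one smoothing keeps unseen context words from zeroing out a sense's probability, which matters for small corpora like the one described.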
Natural language processing has become one of the most important fields of artificial intelligence because it is related to the area of human-computer interaction using human languages (natural language generation, question answering, machine translation, etc.) and speech understanding (language modeling). To model the relations between words it is necessary to find the syntactic and semantic relations...
...prosodic focus, using a paradigm based on digit strings, in which the same material and discourse contexts can be used in different languages. We found a striking difference between languages like English and Mandarin Chinese, where prosodic focus is clearly marked in production and accurately recognized in perception, and languages like Korean, where prosodic focus is neither clearly marked in production...
Electronic detection of linguistic negation in free text is a challenging need for many text-handling applications, including sentiment analysis. Our system uses online news archives from two different sources, namely NDTV and The Hindu, to predict the scope of negation in the text. In this paper, our main focus was determining the scope of negation in news articles for two political parties, namely...
We present here a data mining approach for part-of-speech (POS) tagging, an important natural language processing (NLP) classification task. We propose a semi-supervised associative classification method for POS tagging. Existing methods for building POS taggers require extensive domain and linguistic knowledge and resources. Our method uses a combination of a small POS-tagged corpus and untagged...
This paper reports on an approach that contributes towards the problem of discovering fuzzy association rules that exhibit a temporal pattern. The novel application of the 2-tuple linguistic representation identifies fuzzy association rules in a temporal context, whilst maintaining the interpretability of linguistic terms. Iterative Rule Learning (IRL) with a Genetic Algorithm (GA) simultaneously...
Fuzzy rule-based systems (FRBS) use the principles of fuzzy sets and fuzzy logic to describe vague and imprecise statements, and provide a facility to express the behaviour of a system in human-understandable language. Fuzzy information, once defined by a fuzzy system, is fixed regardless of the circumstances, which makes it very difficult to capture the effect of context on the meaning...
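The limitation the abstract describes — a fuzzy definition that stays fixed across contexts — can be sketched with a single triangular membership function for "tall" (the parameters below are hypothetical, chosen only for illustration):

```python
def triangular(a, b, c):
    """Return a triangular membership function with feet a, c and peak b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

# A fixed definition of "tall" (heights in cm). Once chosen, these
# parameters apply in every context -- for a general population and for
# basketball players alike, which is the limitation the abstract notes.
tall = triangular(160, 185, 210)

print(tall(150))    # 0.0 -> clearly not tall
print(tall(185))    # 1.0 -> fully tall
print(tall(172.5))  # 0.5 -> borderline
```

Context adaptation approaches typically rescale or re-parameterize such functions per domain rather than redefining the linguistic term itself.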
This paper proposes a lattice-based method for keyword spotting in online Chinese handwriting to improve the trade-off between accuracy and speed, and to overcome the out-of-vocabulary (OOV) problem of lexicon-driven approaches. Using a character string recognition algorithm, the lattice-based method generates a candidate lattice from the N-best list. We observe that searching multiple candidate strings reduces...
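The idea of searching multiple candidates rather than only the top recognition result can be sketched as follows; the N-best list and scores are invented for illustration and are not the paper's recognizer output or its actual lattice search:

```python
# Toy N-best list for one handwritten line: (candidate string, score).
# Illustrative only -- not the paper's recognizer output.
nbest = [
    ("北京大学", 0.52),
    ("北京天学", 0.31),
    ("比京大学", 0.17),
]

def spot_keyword(nbest, keyword):
    """Return the best score among candidates containing the keyword.

    Searching all N candidates (rather than only the top-1 string)
    recovers keywords that a misrecognized best string would miss --
    the motivation for the lattice-based method described above.
    """
    scores = [score for cand, score in nbest if keyword in cand]
    return max(scores, default=0.0)

print(spot_keyword(nbest, "大学"))  # -> 0.52 (found in 1st and 3rd candidates)
print(spot_keyword(nbest, "中学"))  # -> 0.0  (not in any candidate)
```

A full lattice compactly encodes exponentially many such candidate strings, so the real method searches paths in the lattice instead of enumerating an explicit list.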
Component selection based on quality properties is a fuzzy process because measurable component attributes cannot be mapped with certainty to high-level quality properties such as those proposed by the ISO/IEC 9126 quality model and other similar models. In addition, measurable component quality attributes can be characterized differently for different application domains (e.g., a total execution...
Obtaining a high degree of interpretability and accuracy in the design of complex, high-dimensional fuzzy systems is contradictory in nature; this is well known as the Interpretability-Accuracy (I-A) trade-off. Several methods have been proposed to find a good I-A trade-off. These include multi-objective optimization, context adaptation, hierarchical fuzzy modeling and other issues related to fuzzy partition,...
Building on the analysis of product reviews, an unsupervised product feature categorization method is proposed. Morphemes, the smallest meaningful linguistic units, are used instead of words to measure the intra-relationships among product features. Opinion words around product features, rather than the full context, are chosen to represent the inter-relationships among product features. The...
We present a tool that facilitates the efficient extension of morphological lexica. The tool exploits information from a morphological lexicon, a morphological grammar and a text corpus to guide the acquisition process. In particular, it employs statistical models to analyze out-of-vocabulary words and predict lexical information. These models do not require any additional labeled data for training...