As AdaBoost is constructed, its base classifiers concentrate increasingly on instances that are difficult to classify, and consequently become biased toward those instances. Moreover, once a base classifier is constructed, its voting weight in the final decision is fixed and is identical for every test instance, regardless of the class to which a test instance belongs. To address these problems, the...
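The fixed-weight behavior criticized above can be seen in a minimal sketch of standard AdaBoost combination (the function names are illustrative, not from the paper): each base classifier's voting weight is computed once from its training error and then applied unchanged to every test instance.

```python
import numpy as np

def adaboost_alpha(error):
    """Voting weight of a base classifier in standard AdaBoost:
    fixed once the weighted training error is known, and identical
    for every test instance and every class."""
    return 0.5 * np.log((1.0 - error) / error)

def weighted_vote(predictions, alphas):
    """Combine {-1, +1} base-classifier predictions (one row per
    classifier, one column per test instance) using the fixed
    alphas; the same alphas apply to all test instances."""
    return np.sign(np.dot(alphas, predictions))
```

Note that a more accurate base classifier (lower error) simply receives a larger alpha; nothing in this scheme adapts the weight to the particular test instance.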
“Gain-Based Separation” is a novel heuristic that modifies the standard multiclass decision tree learning algorithm to produce forests that can describe an example or object with multiple classifications. When the information gain at a node would be higher if all examples of a particular classification were removed, those examples are reserved for another tree. In this way, the algorithm performs...
K-Nearest-Neighbor (KNN), an important classification method based on the closest training examples, has been widely used in data mining due to its simplicity, effectiveness, and robustness. However, the class probability estimation, the neighborhood size, and the type of distance function used by KNN can all affect its classification accuracy. Many researchers have focused on improving the accuracy...
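Two of the factors the abstract names, the neighborhood size and the distance function, appear directly as parameters in a minimal KNN sketch (a plain majority-vote baseline, not the paper's improved method):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3, dist=None):
    """Plain majority-vote KNN. `k` (neighborhood size) and `dist`
    (distance function) are the two accuracy-critical knobs the
    abstract mentions; Euclidean distance is the default."""
    if dist is None:
        dist = lambda a, b: np.linalg.norm(a - b)
    d = [dist(xi, x) for xi in X_train]          # distance to every training example
    nearest = np.argsort(d)[:k]                  # indices of the k closest
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]            # majority class among neighbors
```

Swapping `dist` for, say, a Manhattan or learned metric, or changing `k`, can change the prediction on the same data, which is why these choices affect accuracy.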
This paper presents an extension to the Rule-Based Similarity (RBS) model, a novel rough set approach to the problem of learning a similarity relation from data. The original model, proposed in [1], applied the notion of Tversky's feature contrast model in a rough set framework to facilitate accurate case-based classification. In the dynamic RBS model, a dynamic reducts technique is used to broaden...
This paper investigates data-mining classification based on radial basis function (RBF) neural networks. After intensive analysis, the training algorithm of RBF neural networks is improved with respect to network structure, learning speed, and approximation accuracy. For learning speed, a two-stage learning strategy is used to accelerate the learning process. For approximation accuracy, an error-correction...
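The two-stage strategy mentioned above can be illustrated with a minimal RBF training sketch (assumed structure, not the paper's exact algorithm): in the first stage the hidden-layer centers are fixed (here supplied directly; k-means clustering is a common choice), and in the second stage the linear output weights are obtained in closed form by least squares, which is what makes the second stage fast.

```python
import numpy as np

def train_rbf(X, y, centers, width):
    """Two-stage RBF training sketch. Stage 1: centers are fixed
    in advance. Stage 2: solve the linear output weights by least
    squares over the Gaussian hidden activations."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    Phi = np.exp(-(d ** 2) / (2 * width ** 2))    # hidden-layer activations
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # stage 2: closed-form solve
    return w

def rbf_predict(X, centers, width, w):
    """Evaluate the trained RBF network on new inputs."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(d ** 2) / (2 * width ** 2)) @ w
```

With one center per training point the network can interpolate the training targets exactly, which makes the approximation-accuracy/structure trade-off the abstract refers to easy to see.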
This paper investigates the problem of solving incremental learning tasks using ensemble algorithms. The motivation is to help classifiers learn additional information from new batches of data incrementally while preserving previously acquired knowledge. Experimental results show that the proposed dynamic weighting scheme can achieve better performance compared to the fixed weighting...
A robot path-planning algorithm is proposed based on an ensemble algorithm for the learning classifier system, which designs a fitness function for dynamic environments. The paper derives and proves the convergence of the ensemble algorithm, providing a theoretical guarantee for the path-planning algorithm. Simulation results also show that the combination of genetic algorithms and the learning classifier system...
The approximation accuracy of an RBF network constructed by the incremental learning algorithm is not high, and for function approximation or other tasks demanding high accuracy such a network model cannot meet the requirements. We improve this network model in three respects to address this bottleneck, and experimentally compare and analyze these improvements...
We have recently introduced an incremental learning algorithm, called Learn++.NSE, designed for Non-Stationary Environments (concept drift), where the underlying data distribution changes over time. With each dataset drawn from a new environment, Learn++.NSE generates a new classifier to form an ensemble of classifiers. The ensemble members are combined through a dynamically weighted majority voting,...
In order to select the most predictive features from network sample data for fault diagnosis, a novel adaptive immune clonal selection algorithm (AICSA) is proposed. By simulating mechanisms of the biological immune system such as immune memory, clonal selection, and self-adaptation, AICSA achieves dynamic control of the evolution process, which realizes global optimal computing combined with the local...
Recently, traffic classification (TC) has become increasingly important for network management and measurement tasks. Emerging machine-learning-based classification methods can achieve high classification accuracy and fast identification; however, all of these TC methods to date assume the stability of the classification model built from network traffic...
We outline an incremental learning algorithm designed for nonstationary environments where the underlying data distribution changes over time. With each dataset drawn from a new environment, we generate a new classifier. Classifiers are combined through dynamically weighted majority voting, where voting weights are determined based on classifiers' age and accuracy on current and past environments...
We describe an ensemble of classifiers based approach for incrementally learning from new data drawn from a distribution that changes in time, i.e., data obtained from a nonstationary environment. Specifically, we generate a new classifier using each additional dataset that becomes available from the changing environment. The classifiers are combined by a modified weighted majority voting, where the...
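The dynamically weighted voting described in the two abstracts above can be sketched as follows. This is a hypothetical illustration of the general idea (function name and decay scheme are assumptions, not the published Learn++.NSE weighting): each ensemble member's voting weight is derived from its errors on recent batches, with older batches discounted, so the weights track the changing environment instead of staying fixed.

```python
import numpy as np

def dynamic_weights(error_history, decay=0.5):
    """Sketch of dynamic weighting for a nonstationary ensemble.
    `error_history` is a (classifiers x batches) array of error
    rates, oldest batch first. Recent batches are weighted most
    heavily; the result is a log-odds voting weight per classifier."""
    errors = np.asarray(error_history, dtype=float)
    T = errors.shape[1]
    time_w = decay ** np.arange(T - 1, -1, -1)        # newest batch gets weight 1
    avg_err = (errors * time_w).sum(axis=1) / time_w.sum()
    avg_err = np.clip(avg_err, 1e-6, 0.5 - 1e-6)      # keep log-odds finite, positive
    return np.log((1 - avg_err) / avg_err)
```

Under this scheme a classifier that was accurate on the most recent environment outvotes one that was accurate only long ago, which is the behavior a fixed-weight ensemble cannot provide.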