In ensemble learning methods for training individual learners in a committee machine, two terms should be optimized: the squared difference between the target and the learner's output, and the estimated correlation between the learner and the rest of the learners in the ensemble. The first term forces each learner to fit the given data. The second term...
Negative correlation learning is an ensemble learning approach that creates negatively correlated learners simultaneously and cooperatively in a committee machine. One problem in negative correlation learning is that the learning error functions are defined in the same way for all individual learners, so learners have little choice in deciding how to learn the given data...
Negative correlation learning has been proposed to create a set of negatively correlated artificial neural networks (ANNs) in a committee machine. In negative correlation learning, the error signal for each ANN on a given data point is not decided solely by the difference between the ANN's output and the target. Two terms are optimized at the same time. The first is to minimize the error between...
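The two-term objective described in these abstracts can be sketched in code. In the standard NCL formulation, the penalty for learner i is p_i = -(F_i - F̄)², which yields the per-learner error signal (F_i - d) - λ(F_i - F̄). The sketch below is a minimal illustration of that update, assuming a toy regression task and an ensemble of linear regressors on random cosine features (the papers above train neural networks; data, model family, and hyperparameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (illustrative): y = sin(x) + noise
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Ensemble of M linear regressors on per-learner random cosine features
# (an assumption for brevity; NCL itself is model-agnostic).
M, D = 5, 30
proj = rng.normal(size=(M, 1, D))                 # fixed random projections
bias = rng.uniform(0.0, 2 * np.pi, size=(M, D))   # fixed random phases
W = np.zeros((M, D))                              # trainable weights

# Precompute each learner's (N, D) feature matrix
Phi = np.stack([np.cos(X @ proj[m] + bias[m]) for m in range(M)])

def ensemble_mse():
    F = np.einsum('mnd,md->mn', Phi, W)           # all learners' outputs
    return float(np.mean((F.mean(axis=0) - y) ** 2))

lam, lr = 0.5, 0.1
mse_before = ensemble_mse()
for _ in range(300):
    F = np.einsum('mnd,md->mn', Phi, W)           # (M, N) learner outputs
    F_bar = F.mean(axis=0)                        # ensemble output
    for m in range(M):
        # NCL error signal: (F_m - d) - lambda * (F_m - F_bar).
        # The second term pushes learner m away from the ensemble mean,
        # encouraging negatively correlated errors.
        delta = (F[m] - y) - lam * (F[m] - F_bar)
        W[m] -= lr * (delta[:, None] * Phi[m]).mean(axis=0)
mse_after = ensemble_mse()
```

With λ = 0, each learner is trained independently; with 0 < λ < 1 the learners trade individual accuracy for error diversity, which is the regime most NCL work operates in.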
Two different implementations of negative correlation learning with λ > 1 are discussed in this paper. In the first implementation, every learner is forced to learn to be different from the ensemble on every data point, regardless of what the ensemble and the learner itself have already learned. In the second implementation, every learner selectively learns to be different from the ensemble on each data point...
Different from independent and sequential learning, negative correlation learning trains all learners in an ensemble simultaneously and cooperatively, with direct interactions. In negative correlation learning, each learner can be trained by error signals based only on the differences between the output of the ensemble and the target output on a given example, without considering whether the learner itself has...
Different from other re-sampling ensemble learning methods, negative correlation learning trains all individual models in an ensemble simultaneously and cooperatively. In negative correlation learning, each individual can see all of the training data and adapt its target function based on what the rest of the individuals in the ensemble have learned. In this paper, two error bounds are introduced into negative correlation...
Two error bounds were introduced into the learning process of balanced ensemble learning: the lower bound of error rate (LBER) and the upper bound of error output (UBEO) on the training set. These two error bounds decide whether a training data point should be learned further once balanced ensemble learning has reached a certain stage. Before the error rates are higher...
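One way to read the bound-based gating described above is as a per-point mask over the training set. The sketch below is a hedged interpretation only, since the abstract is truncated: the gating rule, the function name `selective_mask`, and the threshold values are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

def selective_mask(ensemble_out, targets, error_rate, lber=0.05, ubeo=0.4):
    """Decide which training points should still be learned.

    Hedged interpretation of the LBER/UBEO idea: once the error rate on
    the training set has dropped below the lower bound LBER, points whose
    error output already lies below the upper bound UBEO are excluded
    from further learning. Thresholds here are illustrative assumptions.
    """
    err_out = np.abs(ensemble_out - targets)      # per-point error output
    if error_rate < lber:
        return err_out >= ubeo                    # keep only hard points
    return np.ones_like(err_out, dtype=bool)      # otherwise learn everything

# Example: outputs in [0, 1] against 0/1 targets.
out = np.array([0.9, 0.2, 0.6, 0.1])
tgt = np.array([1.0, 0.0, 0.0, 1.0])
mask_early = selective_mask(out, tgt, error_rate=0.30)  # before reaching LBER
mask_late = selective_mask(out, tgt, error_rate=0.02)   # after reaching LBER
```

Under this reading, early training touches every point, while late training concentrates updates on the points that are still poorly learned, which matches the abstracts' theme of avoiding unbalanced learning.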
Ensemble learning systems can lessen the degree of overfitting that often appears in supervised learning with a single model. However, overfitting has still been observed in negative correlation learning, an ensemble learning method with a correlation-based penalty. Two constraints were introduced into negative correlation learning in order to overcome such overfitting. One...
It has been proved that there is a bias-variance-covariance trade-off among trained neural network ensembles. In this paper, extra learning on random data points is proposed to control the variation of the correlations in negative correlation learning (NCL). Without control of the correlations, NCL might produce arbitrary values on unknown data points after learning too much on the...
In neural network learning, it has often been observed that some data are learned extremely well while others are barely learned. Such unbalanced learning often leads to learned neural networks or neural network ensembles that are too strongly biased toward the well-learned data. The stronger bias can contribute to larger variance and poorer generalization on unseen...
In this paper, an intelligent prediction approach based on neural networks, rough sets, and a Genetic Selection Strategy Particle Swarm Optimization algorithm (GSS-PSO) is proposed to measure the risk area caused by slopes. With this approach, the attribute reduction method based on neighborhood rough sets is adopted to conduct the attribute reduction; then the genetic strategy is used to reform the...