This paper discusses how to apply ensemble learning to individual learners trained on randomly split data. Rather than letting the individual learners learn independently on different subsets, it would be better for them to learn cooperatively by exchanging the learned values. In this way, the individual learners could learn the whole given data together while they...
This paper proposes a hybrid negative correlation learning in which each individual neural network in a neural network ensemble would either learn a data point by negative correlation learning or learn to be different from the neural network ensemble. The method is implemented by randomly splitting the training set into two subsets for each individual neural network. On one subset of the...
It is certain that the individual learners should be different from each other in order for a committee machine to achieve better performance. However, differences alone among the individual learners are not enough for the committee machine to predict well on unknown data. It would be essential for each individual learner to be able to decide whether or not to learn to be different from the other...
Unlike independent and sequential learning, negative correlation learning trains all learners in an ensemble simultaneously and cooperatively, with direct interactions. In negative correlation learning, each learner is trained by error signals based only on the differences between the output of the ensemble and the target output on a given example, without considering whether it has...
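The error signal described in this abstract can be sketched concretely. Below is a minimal illustration of negative correlation learning for an ensemble of linear models, assuming the penalty form commonly given in the literature (a penalty coefficient lam and the ensemble mean F); the hyperparameter values and the choice of linear models are illustrative, not taken from the paper.

```python
import numpy as np

def ncl_train(X, d, n_learners=5, lam=0.5, lr=0.1, epochs=800, seed=0):
    """Sketch of negative correlation learning for linear models w_i.x + b_i.

    Each learner i minimises
        E_i = 1/2 (F_i - d)^2 + lam * (F_i - F) * sum_{j != i} (F_j - F),
    where F is the ensemble mean. Since sum_{j != i}(F_j - F) = -(F_i - F),
    the penalty pushes F_i away from the ensemble mean, i.e. toward
    negative correlation. Treating F as a constant gives the error signal
        dE_i/dF_i = (F_i - d) - lam * (F_i - F).
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    W = rng.normal(scale=0.1, size=(n_learners, p))
    b = np.zeros(n_learners)
    for _ in range(epochs):
        outs = X @ W.T + b                       # (n, n_learners): F_i(x)
        F = outs.mean(axis=1, keepdims=True)     # ensemble output
        # per-learner error signal including the NCL penalty
        delta = (outs - d[:, None]) - lam * (outs - F)
        W -= lr * (delta.T @ X) / n
        b -= lr * delta.mean(axis=0)
    return W, b

def ncl_predict(W, b, X):
    """Ensemble prediction: the mean of the individual outputs."""
    return (X @ W.T + b).mean(axis=1)
```

With lam = 0 the learners train independently on the same data; increasing lam trades individual accuracy for diversity among the members.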
Unlike other re-sampling ensemble learning methods, negative correlation learning trains all individual models in an ensemble simultaneously and cooperatively. In negative correlation learning, each individual model sees all the training data and adapts its target function based on what the rest of the individuals in the ensemble have learned. In this paper, two error bounds are introduced into negative correlation...
Two error bounds were introduced into the learning process of balanced ensemble learning: the lower bound of error rate (LBER) and the upper bound of error output (UBEO) on the training set. These two error bounds decide whether a training data point should be learned further once balanced ensemble learning has reached a certain stage. Before the error rates are higher...
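One plausible reading of the LBER/UBEO gating is: once the ensemble's training error rate has dropped to the lower bound, only points whose error output exceeds the upper bound are learned further. The function below sketches that decision rule; the threshold values and the exact comparison direction are assumptions, since the abstract is truncated.

```python
def select_points(errors, error_rate, lber=0.05, ubeo=0.4):
    """Hedged sketch of the LBER/UBEO gating described in the abstract.

    errors     -- per-point error outputs of the ensemble
    error_rate -- current training error rate of the ensemble
    lber, ubeo -- illustrative threshold values, not taken from the paper

    Returns a boolean mask: True means the point is learned further.
    """
    if error_rate > lber:
        # early stage: error rate still above LBER, learn every point
        return [True] * len(errors)
    # later stage: keep only points whose error output exceeds UBEO
    return [e > ubeo for e in errors]
```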
Balanced ensemble learning is developed from negative correlation learning by shifting the learning targets. Compared to negative correlation learning, balanced ensemble learning learns faster and achieves higher accuracy on the training sets for a number of tested classification problems. However, it has been found that the higher accuracy balanced ensemble learning obtained...
An ensemble learning system can lessen the degree of overfitting that often appears when a single model is trained by supervised learning. However, overfitting has still been observed in negative correlation learning, an ensemble learning method with a correlation-based penalty. Two constraints were introduced into negative correlation learning in order to overcome such overfitting. One...
It has been proved that there is a bias-variance-covariance trade-off among trained neural network ensembles. In this paper, extra learning on random data points was proposed to control the variations of the correlations in negative correlation learning (NCL). Without such control of the correlations, NCL might produce arbitrary values on unknown data points after learning too much on the...
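The trade-off mentioned here can be checked numerically. The snippet below simulates hypothetical ensemble-member outputs at a single test point and verifies the standard decomposition MSE = bias^2 + (1/M)*var + (1 - 1/M)*cov; all distributions and constants are invented for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
M, T = 5, 10000          # M ensemble members, T simulated training sets
d = 1.0                  # target value at a fixed test point

# hypothetical member outputs: correlated Gaussian draws around biased means
means = d + rng.normal(scale=0.1, size=M)
L = rng.normal(size=(M, M)) * 0.2
outs = means[:, None] + L @ rng.normal(size=(M, T))   # shape (M, T)

F = outs.mean(axis=0)                 # ensemble output in each trial
mse = np.mean((F - d) ** 2)           # MSE of the ensemble

C = np.cov(outs, ddof=0)              # M x M sample covariance of members
bias = outs.mean(axis=1) - d          # per-member bias
bias2 = bias.mean() ** 2              # squared average bias
var = np.mean(np.diag(C))             # average member variance
cov = (C.sum() - np.trace(C)) / (M * (M - 1))   # average pairwise covariance

# bias-variance-covariance decomposition of the ensemble MSE
decomp = bias2 + var / M + (1 - 1 / M) * cov
```

The identity shows why driving pairwise covariances down (as NCL's penalty does) reduces ensemble error even when individual variances stay the same.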
It has been shown that as the number of weak learners in a majority voting model increases, so does its generalization, provided those weak learners are uncorrelated or negatively correlated. Although learning algorithms such as bagging and boosting have been developed to create such weak learners, the learners trained by these algorithms are actually not so weak in many applications. This...
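The claim about uncorrelated weak learners can be illustrated with the classic majority-vote calculation: for independent learners that are each correct with probability p > 0.5, the probability that the majority votes correctly grows with the ensemble size. A small sketch follows; independence is the key assumption, and correlated learners gain far less than this calculation suggests.

```python
import math

def majority_vote_accuracy(p, n):
    """Probability that a majority of n independent learners, each correct
    with probability p, votes correctly (n odd). Sums the binomial tail
    P(X > n/2) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))
```

For example, learners that are only 60% accurate individually give a far more accurate ensemble once enough independent votes are combined.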