The performance of a single weak classifier can be improved by using combining techniques such as bagging, boosting and the random subspace method. When applying them to linear discriminant analysis, it appears that they are useful in different situations. Their performance is strongly affected by the choice of the base classifier and the training sample size. Moreover, their usefulness depends on...
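As an illustration of one such combining technique, here is a minimal bagging sketch in Python; a toy 1-D decision stump stands in for the linear discriminant base classifier discussed above, so this is an assumption-laden illustration rather than the experimental setup of the paper:

```python
import random

def train_stump(data):
    """Fit a 1-D decision stump: pick the threshold and sign minimising training error."""
    best_thr, best_sign, best_err = None, 1, float("inf")
    for thr in sorted({x for x, _ in data}):
        for sign in (1, -1):
            err = sum(1 for x, y in data if (sign if x >= thr else -sign) != y)
            if err < best_err:
                best_thr, best_sign, best_err = thr, sign, err
    return best_thr, best_sign

def stump_predict(model, x):
    thr, sign = model
    return sign if x >= thr else -sign

def bag(data, n_models=11, seed=0):
    """Bagging: train each stump on a bootstrap resample, combine by majority vote."""
    rng = random.Random(seed)
    models = [train_stump([rng.choice(data) for _ in data]) for _ in range(n_models)]
    def predict(x):
        votes = sum(stump_predict(m, x) for m in models)
        return 1 if votes >= 0 else -1
    return predict

# Toy separable 1-D problem: class -1 below 0, class +1 above 0.
data = [(-3, -1), (-2, -1), (-1, -1), (1, 1), (2, 1), (3, 1)]
predict = bag(data)
```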
AdaBoost boosts the performance of a weak learner by training a committee of weak learners which learn different features of the training sample space with different emphasis and jointly perform classification or regression of each new data sample by a weighted cumulative vote. We use RBF kernel classifiers to demonstrate that boosting a strong learner generally contributes to performance degradation,...
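The committee-with-weighted-vote mechanism described above can be sketched as discrete AdaBoost over 1-D decision stumps; this is a generic illustration of the algorithm, not the paper's RBF-kernel setup:

```python
import math

def train_stump(data, w):
    """Weighted decision stump on 1-D inputs: minimise weighted error."""
    best = None
    for thr in sorted({x for x, _ in data}):
        for sign in (1, -1):
            err = sum(wi for wi, (x, y) in zip(w, data)
                      if (sign if x >= thr else -sign) != y)
            if best is None or err < best[0]:
                best = (err, thr, sign)
    return best[1], best[2]

def predict_stump(model, x):
    thr, sign = model
    return sign if x >= thr else -sign

def adaboost(data, rounds=5):
    """Discrete AdaBoost: each round trains a stump on reweighted data,
    up-weighting the points earlier stumps got wrong; the final decision
    is a weighted cumulative vote of the committee."""
    n = len(data)
    w = [1.0 / n] * n
    committee = []
    for _ in range(rounds):
        model = train_stump(data, w)
        err = sum(wi for wi, (x, y) in zip(w, data)
                  if predict_stump(model, x) != y)
        err = min(max(err, 1e-10), 1 - 1e-10)   # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)  # committee-member weight
        committee.append((alpha, model))
        w = [wi * math.exp(-alpha * y * predict_stump(model, x))
             for wi, (x, y) in zip(w, data)]
        s = sum(w)
        w = [wi / s for wi in w]
    def predict(x):
        return 1 if sum(a * predict_stump(m, x) for a, m in committee) >= 0 else -1
    return predict

data = [(-2, -1), (-1, -1), (1, 1), (2, 1)]
predict = adaboost(data)
```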
A communication model for the Hypothesis Boosting (HB) problem is proposed. Under this model, the AdaBoost algorithm can be viewed as a threshold decoding approach for a repetition code. Generalizing this decoding view under the theory of Recursive Error Correcting Codes allows the formulation of a generalized class of low-complexity learning algorithms applicable in high dimensional classification...
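The threshold-decoding view can be illustrated on a plain repetition code; this minimal example is an assumption of ours, not the paper's generalized construction:

```python
def threshold_decode(received_bits):
    """Threshold (majority) decoding of a repetition code: the transmitted bit
    is taken to be whichever value occurs in more than half the repetitions.
    This is the same weighted-vote structure AdaBoost applies to its weak
    hypotheses, with all repetitions weighted equally."""
    return 1 if sum(received_bits) * 2 > len(received_bits) else 0
```

For instance, a 5-fold repetition of the bit 1 survives two channel flips, just as a boosted committee tolerates a minority of erring weak learners.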
This paper investigates a methodology for effective model selection of cost-sensitive boosting algorithms. In many real situations, e.g. automated medical diagnosis, it is crucial to tune the classification performance towards the sensitivity and specificity required by the user. To this end, for binary classification problems, we have designed a cost-sensitive variant of AdaBoost where (1)...
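The two quantities this tuning targets, sensitivity and specificity, can be computed as follows; this is a generic sketch, and the paper's cost-sensitive AdaBoost variant itself is not reproduced here:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)
    for binary labels in {0, 1}."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)
```

In a diagnostic setting, a model-selection procedure would sweep the cost parameter and keep the model whose (sensitivity, specificity) pair best matches the user's requirement.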
This work proposes a novel method for constructing RBF networks, based on boosting. The task assigned to the base learner is to select an RBF, while the boosting algorithm linearly combines the different RBFs. At each boosting iteration, a new neuron is incorporated into the network. The method for selecting each RBF is based on randomly selecting several examples as the centers, considering...
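A loose sketch of the construction idea, with one unit added per iteration and candidate centers drawn at random from the examples. Greedy least-squares fitting of the current residual is our assumption standing in for the paper's boosting criterion:

```python
import math
import random

def rbf(center, width, x):
    """Gaussian radial basis function on 1-D inputs."""
    return math.exp(-((x - center) ** 2) / (2 * width ** 2))

def grow_rbf_network(data, n_units=5, n_candidates=8, width=1.0, seed=0):
    """Each iteration adds one RBF unit: candidate centers are sampled from
    the examples, each gets a least-squares weight against the current
    residual, and the candidate with the lowest residual error is kept."""
    rng = random.Random(seed)
    units = []  # list of (weight, center)
    residual = {x: y for x, y in data}  # assumes distinct x values
    for _ in range(n_units):
        best = None
        for _ in range(n_candidates):
            c, _ = rng.choice(data)
            phi = [rbf(c, width, x) for x, _ in data]
            num = sum(p * residual[x] for p, (x, _) in zip(phi, data))
            den = sum(p * p for p in phi)
            wgt = num / den
            sse = sum((residual[x] - wgt * p) ** 2
                      for p, (x, _) in zip(phi, data))
            if best is None or sse < best[0]:
                best = (sse, wgt, c)
        _, wgt, c = best
        units.append((wgt, c))
        for x, _ in data:
            residual[x] -= wgt * rbf(c, width, x)
    def predict(x):
        return sum(wgt * rbf(c, width, x) for wgt, c in units)
    return predict

# Toy regression target: a Gaussian bump sampled on a grid.
data = [(x, math.exp(-x * x / 2.0)) for x in range(-3, 4)]
predict = grow_rbf_network(data)
```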
In this paper, the error-reject trade-off of linearly combined multiple classifiers is analysed in the framework of the minimum risk theory. The theoretical analysis described in [12,13] is extended to handle the reject option, and the optimality of the error-reject trade-off is analysed under the assumption of independence among the errors of the individual classifiers. Improvements of the error-reject...
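The reject option itself can be illustrated with a Chow-style confidence threshold; this standard rule is an illustration, not the paper's extended analysis:

```python
def classify_with_reject(posteriors, threshold):
    """Chow-style reject rule: abstain (return None) when the top posterior
    falls below a confidence threshold, otherwise output the arg-max class."""
    best = max(range(len(posteriors)), key=lambda i: posteriors[i])
    return best if posteriors[best] >= threshold else None
```

Sweeping the threshold traces out the error-reject trade-off curve: higher thresholds reject more patterns but make fewer errors on those accepted.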
Amidst the conflicting evidence of superiority of one over the other, we investigate the Sum and majority Vote combining rules for the two class case at a single point. We show analytically that, for Gaussian estimation error distributions, Sum always outperforms Vote, whereas for heavy tail distributions Vote may outperform Sum.
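The two rules can be stated in a few lines; the toy posterior estimates below show that they can disagree on the same inputs:

```python
def sum_rule(probs):
    """Sum rule: average the experts' posterior estimates for class 1,
    then threshold the average."""
    return 1 if sum(probs) / len(probs) >= 0.5 else 0

def majority_vote(probs):
    """Vote rule: each expert casts its own hard decision; majority wins."""
    votes = sum(1 for p in probs if p >= 0.5)
    return 1 if votes > len(probs) / 2 else 0
```

One confident expert can carry the Sum rule past the threshold while being outvoted under Vote, which is exactly where the estimation-error distribution decides which rule is better.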
We report the results from an experimental investigation on the complexity of data subsets generated by the Random Subspace method. The main aim of this study is to analyse the variability of the complexity among the generated subsets. Four measures of complexity have been used, three from [4]: the minimal spanning tree (MST), the adherence subsets measure (ADH), the maximal feature efficiency (MFE);...
For learning purposes, representations of real world objects can be built by using the concept of dissimilarity (distance). In such a case, an object is characterized in a relative way, i.e. by its dissimilarities to a set of the selected prototypes. Such dissimilarity representations are found to be more practical for some pattern recognition problems. When experts cannot decide for a single...
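The representation itself is simple to state; the numeric dissimilarity below is a hypothetical stand-in for whatever domain measure the experts provide:

```python
def dissimilarity_representation(objects, prototypes, d):
    """Represent each object relatively: as the vector of its dissimilarities
    d(object, prototype) to a fixed set of selected prototypes."""
    return [[d(o, p) for p in prototypes] for o in objects]
```

Any vector-space classifier can then be trained on these dissimilarity vectors, even when no natural feature representation of the original objects exists.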
In previous work, we have confirmed the performance gains that can be obtained in speaker recognition by splitting the (clean) wide-band speech signal into several subbands, employing separate pattern classifiers for each subband, and then using multiple classifier fusion (‘recombination’) techniques to produce a final decision. However, our earlier work used fairly rudimentary recognition techniques...
In this paper we discuss classifier architectures to categorize time series. Three different architectures for the fusion of local classifier decisions are presented and applied to classify recordings of cricket songs. Different features from local time windows are extracted automatically from the waveform of the sound patterns. These features are used to classify the whole time series. We present...
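A minimal sketch of one fusion architecture of this kind: local windows are classified independently and the decisions are fused by plurality vote. The window classifier below is a hypothetical stand-in for a trained local classifier:

```python
from collections import Counter

def windows(series, size, step):
    """Slice a time series into overlapping local windows."""
    return [series[i:i + size] for i in range(0, len(series) - size + 1, step)]

def classify_series(series, window_classifier, size=4, step=2):
    """Fuse local decisions: classify each window, then take a plurality
    vote over the whole recording."""
    decisions = [window_classifier(w) for w in windows(series, size, step)]
    return Counter(decisions).most_common(1)[0][0]

# Hypothetical stand-in: call a window "cricket" if its peak amplitude is high.
detect = lambda w: "cricket" if max(w) > 0.5 else "noise"
series = [0.1, 0.9, 0.8, 0.9, 0.7, 0.0, 0.1, 0.2, 0.1, 0.0]
```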
There are problems in pattern recognition where the output of a system is a sequence of classes rather than a single class. A well-known example is handwritten sentence recognition. In order to make those problems amenable to classifier combination techniques, an algorithm for sequence alignment must be provided. The present paper describes such an algorithm. The algorithm extends an earlier method...
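A generic dynamic-programming alignment (Levenshtein distance) illustrates the kind of algorithm involved; the paper's extended method is not reproduced here:

```python
def edit_distance(a, b):
    """Classic dynamic-programming sequence alignment: the minimum number of
    insertions, deletions and substitutions turning sequence a into b."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j  # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # match/substitution
    return dp[m][n]
```

Tracing back through the table yields the alignment itself, which is what lets class sequences from different recognizers be combined position by position.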
In many pattern recognition tasks, an approach based on combining classifiers has shown a significant potential gain in comparison to the performance of an individual best classifier. This improvement turned out to be subject to a sufficient level of diversity exhibited among classifiers, which in general can be assumed as a selective property of classifier subsets. Given a large number of classifiers,...
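One common pairwise diversity measure, the disagreement rate, can be sketched as follows; it is a standard measure, offered here only as an example of the kind of selective property mentioned above:

```python
def disagreement(preds_a, preds_b):
    """Pairwise diversity: the fraction of samples on which two classifiers
    output different predictions (0 = identical, 1 = always disagree)."""
    return sum(a != b for a, b in zip(preds_a, preds_b)) / len(preds_a)
```

Subset selection can then, for instance, prefer ensembles whose average pairwise disagreement exceeds some floor.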
Support vector machines (SVM) are learning algorithms derived from statistical learning theory. The SVM approach was originally developed for binary classification problems. In this paper SVM architectures for multi-class classification problems are discussed; in particular, we consider binary trees of SVMs to solve the multi-class pattern recognition problem. Numerical results for different classifiers...
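The routing through a binary tree of two-class decisions can be sketched as follows; simple threshold functions stand in for trained SVMs:

```python
def tree_classify(node, x):
    """Route x down a binary tree: each internal node holds a binary decision
    function splitting the remaining classes into two groups; leaves are
    class labels."""
    while isinstance(node, tuple):
        decide, left, right = node
        node = left if decide(x) == 0 else right
    return node

# Toy 4-class problem on the real line; each lambda stands in for a binary SVM.
tree = (lambda x: 0 if x < 0 else 1,
        (lambda x: 0 if x < -1 else 1, "A", "B"),
        (lambda x: 0 if x < 1 else 1, "C", "D"))
```

A pattern thus needs only log-many binary evaluations instead of one per class pair, which is the appeal of the tree architecture.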
Computer-based face perception is becoming increasingly important for many applications like biometric face recognition, video coding or multi-modal human-machine interaction. Fast and robust detection and segmentation of a face in an unconstrained visual scene is a basic requirement for all kinds of face perception. This paper deals with the integration of three simple visual cues for the task of...
The veto effect, caused by contradicting experts outputting zero probability estimates, leads to fusion strategies performing suboptimally. This can be resolved using moderation. The moderation formula is derived for the k-NN classifier using a Bayesian prior. The merits of moderation are examined on real data sets.
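The veto effect and its resolution can be illustrated as follows; Laplace-style smoothing is used here as a stand-in for the paper's Bayesian moderation formula:

```python
def knn_estimate(k_c, k):
    """Raw k-NN posterior estimate: fraction of the k neighbours in the class.
    Can be exactly 0, letting one expert veto a product-rule fusion."""
    return k_c / k

def moderated_estimate(k_c, k, n_classes=2):
    """Moderated estimate (Laplace / uniform-prior smoothing, standing in for
    the Bayesian derivation): never exactly 0 or 1."""
    return (k_c + 1) / (k + n_classes)

def product_fusion(estimates):
    """Product-rule fusion: a single zero estimate zeroes the fused score."""
    out = 1.0
    for p in estimates:
        out *= p
    return out
```

With raw estimates, one expert seeing 0 of 5 neighbours in the class vetoes the ensemble regardless of the other experts; the moderated estimate keeps the fused score positive.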
We introduce an algorithm for incrementally constructing a hybrid network of radial and perceptron hidden units. The algorithm determines whether a radial or a perceptron unit is required at a given region of input space. Given an error target, the algorithm also determines the number of hidden units. This results in a final architecture which is often much smaller than an RBF network or an MLP. A benchmark...
Multiple classifier systems fall into two types: classifier combination systems and classifier choice systems. The former aggregate component systems to produce an overall classification, while the latter choose between component systems to decide which classification rule to use. We illustrate each type applied in a real context where practical constraints limit the type of base classifier which...
It is known that the Error Correcting Output Code (ECOC) technique can improve generalisation for problems involving more than two classes. ECOC uses a strategy based on calculating distance to a class label in order to classify a pattern. However, in some applications other kinds of information, such as individual class probabilities, can be useful. Least Squares (LS) is an alternative combination strategy...
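The distance-to-class-label strategy that ECOC uses can be sketched as minimum-Hamming-distance decoding; the code matrix below is hypothetical:

```python
def ecoc_decode(codebook, bits):
    """ECOC decoding: assign the class whose codeword is nearest to the
    observed bit string in Hamming distance."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(codebook, key=lambda c: hamming(codebook[c], bits))

# Hypothetical 3-class code matrix: one 5-bit codeword per class, where each
# bit position corresponds to one trained binary classifier.
codebook = {"cat": [0, 0, 1, 1, 0],
            "dog": [1, 0, 0, 1, 1],
            "fox": [0, 1, 0, 0, 1]}
```

Because decoding picks the nearest codeword, a single erring binary classifier is corrected, which is the source of the generalisation gain; the LS strategy in the paper replaces this distance step when soft outputs are available.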
One of the main factors affecting the effectiveness of ECOC methods for classification is the dependence among the errors of the computed codeword bits. We present an extensive experimental work for evaluating the dependence among output errors of the decomposition unit of ECOC learning machines. In particular, we compare the dependence between ECOC Multi Layer Perceptrons (ECOC monolithic), made...