We propose a framework for learning generalized additive models at very little additional cost (a small constant factor) compared to some of the most efficient schemes for learning linear classifiers, such as linear SVMs and regularized logistic regression. We achieve this through a simple feature encoding scheme followed by a novel approach to regularization, which we term ``generalized lasso''. Additive models...
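The abstract does not specify the encoding, so the following is only a hedged sketch (hypothetical helper names, equal-frequency binning assumed) of why a simple per-feature encoding makes a linear classifier additive: one-hot-encoding each feature into bins means any linear score over the encoded vector is a sum of per-feature step functions.

```python
def bin_edges(values, n_bins):
    """Equal-frequency bin edges for one feature (illustrative choice)."""
    s = sorted(values)
    return [s[int(i * (len(s) - 1) / n_bins)] for i in range(1, n_bins)]

def encode(x, edges):
    """One-hot indicator of the bin that x falls into."""
    b = sum(1 for e in edges if x >= e)
    v = [0.0] * (len(edges) + 1)
    v[b] = 1.0
    return v

def encode_row(row, all_edges):
    """Concatenate the per-feature one-hot blocks for one example."""
    out = []
    for x, edges in zip(row, all_edges):
        out.extend(encode(x, edges))
    return out

# Toy data: two raw features, two bins each -> 4 encoded dimensions.
X = [[0.1, 5.0], [0.4, 3.0], [0.9, 1.0], [0.7, 4.0]]
all_edges = [bin_edges([r[j] for r in X], 2) for j in range(2)]
Z = [encode_row(r, all_edges) for r in X]
```

A linear model trained on `Z` then assigns one weight per (feature, bin) pair, i.e. a piecewise-constant function of each original feature, summed across features.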
To exploit the informative components hidden in nonnegative matrix factorization, an information-theoretic learning method, termed ITNMF, is presented. Unlike existing NMF methods, the proposed method is able to handle general objective optimization, and employs the conjugate gradient technique to accelerate the iterative optimization. To tackle the null matrix factorization problem,...
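The snippet does not give ITNMF's update rules; as background only, here is a sketch of the classical multiplicative-update NMF baseline (Lee and Seung style, minimizing the Frobenius reconstruction error) that conjugate-gradient variants aim to improve on. All names are illustrative.

```python
def matmul(A, B):
    """Plain nested-list matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, k, iters=200, eps=1e-9):
    """Multiplicative updates for V ~ W H with nonnegative factors."""
    import random
    random.seed(0)
    n, m = len(V), len(V[0])
    W = [[random.random() for _ in range(k)] for _ in range(n)]
    H = [[random.random() for _ in range(m)] for _ in range(k)]
    for _ in range(iters):
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(matmul(Wt, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)] for i in range(k)]
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(W, matmul(H, Ht))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)] for i in range(n)]
    return W, H
```

On an exactly rank-k nonnegative matrix these updates drive the reconstruction error toward zero; the `eps` guard avoids division by zero when a factor entry collapses.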
We propose a novel linear discriminant analysis method and demonstrate its superiority over existing linear methods. Based on information theory, we introduce a non-parametric estimate of mutual information with variable kernel bandwidth. Furthermore, we derive a gradient-based optimization algorithm for learning the optimal linear reduction vectors that maximize the mutual information estimate...
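The variable-bandwidth kernel estimator itself is not given in the snippet; to illustrate the underlying quantity, here is a much simpler histogram plug-in estimate of the mutual information between a 1-D projection and the class labels (an assumption-laden stand-in, not the paper's estimator).

```python
import math

def mutual_information(proj, labels, n_bins=4):
    """Plug-in estimate of I(Z; Y) from a histogram of projected values Z."""
    lo, hi = min(proj), max(proj)
    width = (hi - lo) / n_bins or 1.0
    def bin_of(z):
        return min(int((z - lo) / width), n_bins - 1)
    n = len(proj)
    joint, pz, py = {}, {}, {}
    for z, y in zip(proj, labels):
        b = bin_of(z)
        joint[(b, y)] = joint.get((b, y), 0) + 1
        pz[b] = pz.get(b, 0) + 1
        py[y] = py.get(y, 0) + 1
    mi = 0.0
    for (b, y), c in joint.items():
        p = c / n  # joint probability p(b, y)
        mi += p * math.log(p * n * n / (pz[b] * py[y]))
    return mi
```

A projection that separates the classes yields high estimated MI, while a class-independent projection yields roughly zero; a gradient-based method as described would adjust the projection vectors to increase this quantity (with a smooth kernel estimate in place of the histogram).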
The performance of the popular and classical k-nearest neighbor classifier depends on the distance metric. The large margin nearest neighbor classifier, trained with gradient-based optimization, is prone to local minima. In this paper, we present a Mahalanobis metric learning method based on a cutting plane algorithm, which greatly reduces the number of constraints in the semidefinite programming problem. Experimental results...
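To make the dependence on the metric concrete, this sketch (hypothetical function names; the learned matrix `M` is assumed positive semidefinite) classifies with k-NN under a Mahalanobis distance, where different choices of `M` can change which neighbor is nearest.

```python
def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y)."""
    d = [a - b for a, b in zip(x, y)]
    Md = [sum(M[i][j] * d[j] for j in range(len(d))) for i in range(len(d))]
    return sum(di * mi for di, mi in zip(d, Md))

def knn_predict(x, X_train, y_train, M, k=1):
    """k-NN vote under the metric induced by M."""
    dists = sorted((mahalanobis_sq(x, xt, M), yt)
                   for xt, yt in zip(X_train, y_train))
    votes = [yt for _, yt in dists[:k]]
    return max(set(votes), key=votes.count)
```

For example, strongly downweighting a noisy coordinate in `M` can flip the prediction relative to the Euclidean metric, which is exactly the degree of freedom that metric learning exploits.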
We propose a modified discrete HMM that handles multiple modalities. We assume that the feature space is partitioned into subspaces generated by different sources of information. To combine these heterogeneous modalities, we propose a multi-stream discrete HMM that assigns a relevance weight to each subspace. The relevance weights are set locally and depend on the symbols and the states. In particular, we...
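The snippet does not define the combination rule; a common multi-stream choice, shown here only as a sketch with state-dependent weights (the abstract says the weights also depend on the symbols), is a geometric weighting of the per-stream discrete emission probabilities.

```python
def stream_emission(state, obs_symbols, B, w):
    """Combined emission probability for one state: the product of each
    stream's discrete emission probability raised to that stream's
    state-dependent relevance weight.

    B[s][q][v] -- emission prob of symbol v in state q for stream s
    w[q][s]    -- relevance weight of stream s in state q
    """
    p = 1.0
    for s, sym in enumerate(obs_symbols):
        p *= B[s][state][sym] ** w[state][s]
    return p
```

Setting a stream's weight to zero makes that modality irrelevant in the given state, while equal weights treat the streams as independent evidence; learning these weights per state (and per symbol, as proposed) lets the model trust different modalities in different contexts.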
Representation learning is a fundamental challenge for feature selection and plays an important role in applications such as dimension reduction, data mining and object recognition. Traditional linear representation methods, such as principal component analysis (PCA), independent component analysis (ICA) and linear discriminant analysis (LDA), perform well on certain applications based on...
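As a minimal concrete instance of the linear methods named above, this sketch computes the leading PCA direction by power iteration on the sample covariance (a simple stand-in for a full eigendecomposition).

```python
def top_pc(X, iters=100):
    """Leading principal component of X via power iteration on the
    covariance, computed as Xc^T (Xc v) without forming the covariance."""
    n, d = len(X), len(X[0])
    mean = [sum(r[j] for r in X) / n for j in range(d)]
    Xc = [[r[j] - mean[j] for j in range(d)] for r in X]
    v = [1.0] * d
    for _ in range(iters):
        z = [sum(r[j] * v[j] for j in range(d)) for r in Xc]
        w = [sum(Xc[i][j] * z[i] for i in range(n)) / n for j in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

Projecting the data onto the returned unit vector gives the 1-D linear representation with maximal variance, which is the sense in which PCA performs dimension reduction.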
Log-linear models are widely used for labeling feature vectors and graphical models, typically to estimate robust conditional distributions in the presence of a large number of potentially redundant features. Limited-memory quasi-Newton methods such as L-BFGS or BLMVM are the optimization workhorses for such applications, and most of the training time is spent computing the objective and gradient for the optimizer...
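The objective/gradient computation that dominates such training can be illustrated with the simplest log-linear case, L2-regularized binary logistic regression; the function below returns exactly the pair of values an L-BFGS-style optimizer requests at each iteration (a sketch, with labels assumed to be in {-1, +1}).

```python
import math

def objective_and_grad(w, X, y, lam=0.1):
    """Regularized negative log-likelihood and its gradient -- the two
    quantities a quasi-Newton optimizer queries per iteration."""
    d = len(w)
    f = 0.5 * lam * sum(wi * wi for wi in w)
    g = [lam * wi for wi in w]
    for xi, yi in zip(X, y):
        m = yi * sum(wj * xj for wj, xj in zip(w, xi))  # margin
        f += math.log(1.0 + math.exp(-m))
        c = -yi / (1.0 + math.exp(m))  # d(loss)/d(w . xi)
        for j in range(d):
            g[j] += c * xi[j]
    return f, g
```

Because both quantities come from one pass over the data, sharing that pass (rather than computing the objective and gradient separately) is the standard way to cut the per-iteration cost, and the gradient can be validated against finite differences of the objective.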