We propose a conjugate descent procedure for the second-order SMO algorithm that leads to a substantial decrease in the number of iterations required for SMO to converge to a given precision, at only a modest increase in the cost per iteration.
In this paper, inspired by sparse principal component analysis (SPCA) via the elastic net regularization, we propose a new criterion for sparsifying kernel principal component analysis (KPCA) with elastic net regularization that simultaneously accounts for data approximation and sparsification. We first show that KPCA can also be relaxed into a regression-framework optimization problem,...
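The regression relaxation builds on ordinary KPCA, which extracts principal components from a double-centered Gram matrix. A minimal sketch of that baseline step (plain KPCA only, not the elastic-net sparsified variant proposed in the paper; the function name is illustrative):

```python
import numpy as np

def kernel_pca_scores(K, n_components):
    """Plain kernel PCA: component scores in feature space,
    given an uncentered n x n Gram matrix K."""
    n = K.shape[0]
    ones = np.full((n, n), 1.0 / n)
    # double-center the Gram matrix (centers the implicit feature map)
    Kc = K - ones @ K - K @ ones + ones @ K @ ones
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[order], vecs[:, order]
    # scores = eigenvectors scaled by sqrt(eigenvalue)
    return vecs * np.sqrt(np.maximum(vals, 0.0))
```

With a linear kernel K = X Xᵀ this reduces to ordinary PCA scores (up to per-component sign), which makes the routine easy to sanity-check before swapping in a nonlinear kernel.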
In this work we propose a classification framework called class-wise deep dictionary learning (CWDDL). For each class, multiple levels of dictionaries are learnt using features from the previous level as inputs (for the first level, the inputs are the raw training samples). It is assumed that the cascaded dictionaries form a basis for expressing test samples of that class. Based on this assumption, sparse...
In this paper, we study the performance guarantee of weighted ℓ1-constrained quadratic programming in recovering the support of a sparse signal from a few linear measurements. A new sufficient condition for the success of weighted ℓ1-constrained quadratic programming is derived. Further, we demonstrate its applications in two typical weighted ℓ1 models. First, a theoretical result on the modified-BPDN...
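Weighted ℓ1 recovery of the kind analyzed here can be prototyped with a proximal-gradient (ISTA-style) solver for the Lagrangian counterpart min ½‖Ax − b‖² + λ Σᵢ wᵢ|xᵢ|; the sketch below is illustrative, not the paper's exact constrained formulation:

```python
import numpy as np

def weighted_ista(A, b, w, lam=0.1, iters=500):
    """Proximal gradient for min_x 0.5*||Ax-b||^2 + lam * sum_i w_i*|x_i|.
    Small weights w_i encode prior belief that coordinate i is in the support."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = Lipschitz const. of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - step * (A.T @ (A @ x - b))      # gradient step on the quadratic
        thresh = lam * w * step                 # per-coordinate soft threshold
        x = np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)
    return x
```

Down-weighting coordinates believed to be in the support (e.g. wᵢ = 0.1 there, 1 elsewhere) reduces the shrinkage bias on true nonzeros while still suppressing the rest.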
In most sparse-coding-based subspace clustering problems, the non-convex lp-norm minimization (0 < p < 1) often delivers better results than the convex l1-norm minimization. In this paper, we propose a sparse subspace clustering method via joint lp-norm and l2,p-norm minimization, where the lp-norm imposed on the sparse representations achieves more sparsity for clustering while the l2,p-norm...
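For intuition on why lp-norm minimization (0 < p < 1) promotes sparsity, a standard iteratively reweighted least squares (IRLS) scheme for the generic problem min ‖x‖ₚᵖ s.t. Ax = b (a textbook solver, not this paper's joint lp/l2,p formulation) can be sketched as:

```python
import numpy as np

def irls_lp(A, b, p=0.5, iters=60, eps=1.0):
    """IRLS for min ||x||_p^p subject to Ax = b, with 0 < p < 1.
    Each step solves a weighted least-norm problem x = Q A^T (A Q A^T)^-1 b
    with Q = diag((x_i^2 + eps)^(1 - p/2)); eps is annealed toward 0 so the
    smoothed objective approaches the true lp norm."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]    # start from the min-l2 solution
    for _ in range(iters):
        q = (x ** 2 + eps) ** (1.0 - p / 2.0)   # diagonal of Q
        AQ = A * q                              # A @ diag(q)
        x = q * (A.T @ np.linalg.solve(AQ @ A.T, b))
        eps = max(eps / 10.0, 1e-12)
    return x
```

Coordinates that shrink toward zero receive ever smaller inverse weights, so successive least-norm solutions concentrate the energy on a few coordinates, which is the sparsity effect the lp penalty exploits.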
In one-class classification problems, a model is synthesized using only information coming from the nominal state of the data-generating process. Many important applications can be cast in the one-class classification framework, such as anomaly detection, change-in-stationarity detection, and fault recognition. In this paper, we present a novel design methodology for one-class classifiers derived from...
We propose an expectation-maximization-like (EM-like) method to train Boltzmann machines with unconstrained connectivity. It adopts a Monte Carlo approximation in the E-step and, in the M-step, replaces the intractable likelihood objective with efficiently computed surrogate objectives or directly approximates the gradient of the likelihood objective. The EM-like method is a modification of alternating minimization...
A new method of training deep neural networks, including convolutional networks, is proposed. The method deconvexifies the normalized risk-averting error (NRAE) gradually and switches to the risk-averting error (RAE) whenever RAE is computationally manageable. The method creates tunnels between the depressed regions around saddle points, tilts the plateaus, and eliminates nonglobal local minima....
k-anonymization is a basic technique for utilizing sensitive information in data mining without violating personal privacy. It can be efficiently achieved by greedy k-member clustering, where each data record is coded to reflect the cluster structure so that each anonymized record is indistinguishable from at least k − 1 other records. In this paper, with the goal of utilizing co-occurrence information,...
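The greedy k-member clustering step can be sketched as follows (an illustrative baseline on numeric quasi-identifiers with mean-generalization, not this paper's co-occurrence-aware extension; function names are hypothetical):

```python
import numpy as np

def greedy_k_member(records, k):
    """Greedy k-member clustering: seed each cluster with the record farthest
    from the previous seed, then absorb the k-1 records closest to the growing
    cluster's centroid, so every cluster ends up with at least k members."""
    X = np.asarray(records, dtype=float)
    remaining = list(range(len(X)))
    clusters = []
    seed = remaining[0]
    while len(remaining) >= k:
        d = np.linalg.norm(X[remaining] - X[seed], axis=1)
        seed = remaining[int(np.argmax(d))]     # farthest record starts a cluster
        remaining.remove(seed)
        cluster = [seed]
        for _ in range(k - 1):                  # absorb the k-1 nearest records
            centroid = X[cluster].mean(axis=0)
            d = np.linalg.norm(X[remaining] - centroid, axis=1)
            j = remaining[int(np.argmin(d))]
            remaining.remove(j)
            cluster.append(j)
        clusters.append(cluster)
    for i in remaining:                         # leftovers join the nearest cluster
        d = [np.linalg.norm(X[i] - X[c].mean(axis=0)) for c in clusters]
        clusters[int(np.argmin(d))].append(i)
    return clusters

def anonymize(records, clusters):
    """Generalize each record's quasi-identifiers to its cluster mean, so every
    anonymized record is identical to at least k-1 others."""
    X = np.asarray(records, dtype=float)
    out = np.empty_like(X)
    for c in clusters:
        out[c] = X[c].mean(axis=0)
    return out
```

Since every cluster has at least k members and all members receive the same generalized record, each output row is indistinguishable from at least k − 1 others.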
Traditional supervised machine learning tests the learned classifiers on data drawn from the same distribution as the data used for learning. In practice, this hypothesis does not always hold, and the learned classifier has to be transferred from the space of the learning data (also called the source data) to the space of the test data (also called the target data), where it is not directly applicable...
In this paper, we propose an l2,1-norm-based discriminative robust transfer learning (DKTL) method for domain adaptation tasks. The key idea is to simultaneously learn discriminative subspaces using the proposed domain-class-consistency (DCC) metric, and a representation-based robust transfer model between the source domain and the target domain via l2,1-norm minimization. The DCC metric includes two parts:...
This work proposes to learn autoencoders with sparse connections. Prior studies on autoencoders enforced sparsity on the neuronal activations; our approach differs in that we learn sparse connections instead. Sparsity in connections helps in learning (and keeping) the important relations while trimming the irrelevant ones. We have tested the performance of our proposed method on two tasks...
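One simple way to realize sparse connections (a sketch of the general idea under an l1-on-weights assumption, not necessarily this work's exact formulation) is to penalize the weight matrices themselves and train with proximal gradient steps, so that individual connections are driven exactly to zero:

```python
import numpy as np

def train_sparse_connection_ae(X, h, lam=0.1, lr=0.02, epochs=1000, seed=0):
    """Linear autoencoder with an l1 penalty on the *weights* (connections),
    trained by proximal gradient: a gradient step on the reconstruction MSE,
    then soft-thresholding, which prunes weak connections exactly to zero."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(scale=0.5, size=(d, h))   # encoder connections
    W2 = rng.normal(scale=0.5, size=(h, d))   # decoder connections
    for _ in range(epochs):
        Z = X @ W1                    # encode
        R = Z @ W2 - X                # reconstruction residual
        g1 = X.T @ (R @ W2.T) / n     # dMSE/dW1
        g2 = Z.T @ R / n              # dMSE/dW2
        W1 -= lr * g1
        W2 -= lr * g2
        for W in (W1, W2):            # proximal (soft-threshold) step for the l1 term
            np.copyto(W, np.sign(W) * np.maximum(np.abs(W) - lr * lam, 0.0))
    return W1, W2
```

Unlike activity-sparsity penalties, the proximal step here zeroes entries of the weight matrices directly, so pruned connections can be dropped from the network outright.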