Vehicle logo recognition is an important part of vehicle identification in intelligent transportation systems. State-of-the-art vehicle logo recognition approaches typically train models on large datasets. However, there may be only a small training dataset to start with, and more images can be obtained during real-time operation. This paper proposes an online image recognition...
This paper develops a distributed stochastic subgradient-based support vector machine algorithm for the case where the training data are distributed across a network. In this situation, all the data are stored in a decentralized manner and unavailable to any single agent, and each agent must make its own updates based on its local computation and communication with neighbors. Under mild connectivity conditions,...
The P300-based brain-computer interface (BCI) is one of the most common BCIs. Because P300 responses vary from person to person, a large amount of labeled data must be collected from each user, which is time-consuming in many applications. In this work, a transfer learning method that dynamically adjusts the weights of instances is applied to improve the P300-based...
Classification is at the very center of supervised learning. In this work, we propose a novel algorithm to classify the test data set with the aid of a vector field emanating from the training data set. In particular, the vector field is constructed such that the location of each training data point becomes a local minimum of the potential. The test data points are allowed to evolve under the...
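The flow idea in the entry above can be illustrated with a minimal sketch. Here the potential is taken as a sum of negative Gaussian wells centered at the training points, so each training point is a local minimum; test points descend the gradient and inherit the label of the nearest training point. This is a generic construction for illustration, not necessarily the paper's exact one.

```python
import numpy as np

def classify_by_flow(X_train, y_train, X_test, sigma=0.5, lr=0.05, steps=200):
    """Sketch: U(x) = -sum_i exp(-||x - x_i||^2 / (2 sigma^2)) has a local
    minimum near every training point. Test points descend grad U and take
    the label of the closest training point after the flow."""
    X = X_test.astype(float).copy()
    for _ in range(steps):
        # pairwise differences, shape (n_test, n_train, d)
        diff = X[:, None, :] - X_train[None, :, :]
        w = np.exp(-np.sum(diff**2, axis=2) / (2 * sigma**2))
        # gradient of U at each test point
        grad = np.sum(w[:, :, None] * diff, axis=1) / sigma**2
        X -= lr * grad
    nearest = np.argmin(
        np.linalg.norm(X[:, None, :] - X_train[None, :, :], axis=2), axis=1)
    return y_train[nearest]
```

With two well-separated clusters, a test point placed near either cluster flows into that cluster's wells and picks up its label.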
It is well-known that the precision of data, weight vector, and internal representations employed in learning systems directly impacts their energy, throughput, and latency. The precision requirements for the training algorithm are also important for systems that learn on-the-fly. In this paper, we present analytical lower bounds on the precision requirements for the commonly employed stochastic gradient...
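To make the precision question concrete, here is a toy fixed-point model of on-the-fly learning: SGD on a least-squares objective in which the weight vector is re-quantized after every update. This is an illustrative experiment, not the paper's analytical lower bounds.

```python
import numpy as np

def quantize(x, bits=8, clip=1.0):
    """Uniform fixed-point quantizer: clip to [-clip, clip] and round
    to a grid with spacing clip / 2**(bits-1)."""
    step = clip / (2 ** (bits - 1))
    return np.clip(np.round(x / step) * step, -clip, clip)

def quantized_sgd(X, y, bits=8, lr=0.1, epochs=50, seed=0):
    """Train a least-squares linear model with SGD while keeping the
    weight vector at reduced precision after every update."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            err = X[i] @ w - y[i]
            w = quantize(w - lr * err * X[i], bits=bits)
    return w
```

At 8 bits the learned weight lands within roughly one quantization step of the true coefficient; shrinking `bits` lets one observe the accuracy floor that finite precision imposes on training.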
In this paper, we use a combination of support vector machines to improve the standard SVM: different kernel functions are combined to improve the SVM's learning ability and generalization ability, thereby improving the performance of the combined SVM kernel function and avoiding the limitations of a single prediction model. The combination forecasting model makes joint decisions on the results,...
Sparsity-inducing penalties are useful tools in variational methods for machine learning. In this paper, we propose two block-coordinate descent strategies for learning a sparse multiclass support vector machine. The first one works by selecting a subset of features to be updated at each iteration, while the second one performs the selection among the training samples. These algorithms can be efficiently...
Anomaly detection (AD) involves distinguishing abnormality from normality and has a wide spectrum of real-world applications. Kernel-based methods for AD have proven robust to diverse data distributions and to offer good generalization ability. The stochastic gradient descent (SGD) method has recently emerged as a promising framework for devising ultra-fast learning methods. In this paper, we conjoin the...
The Support Vector Machines (SVMs) dual formulation has a non-separable structure that makes the design of a convergent distributed algorithm a very difficult task. Recently some separable and distributable reformulations of the SVM training problem have been obtained by fixing one primal variable. While this strategy seems effective for some applications, in certain cases it could be weak since it...
Past research on Multitask Learning (MTL) has focused mainly on devising adequate regularizers and less on their scalability. In this paper, we present a method to scale up MTL methods which penalize the variance of the task weight vectors. The method builds upon the alternating direction method of multipliers to decouple the variance regularizer. It can be efficiently implemented by a distributed...
Support vector machines have attracted more and more attention as a new machine learning method based on statistical learning theory. At the same time, there are increasing concerns about fault diagnosis for practical engineering systems. First, several kinds of SVM algorithms are introduced, such as LS-SVM, LSVM, and PSVM. In addition, the advantages and disadvantages of those methods...
The recently introduced Support Vector Method (SVM) is one of the most powerful methods for training a Radial Basis Function (RBF) filter in a batch mode. This paper proposes a modification of this method for on-line adaptation of the filter parameters on a block-by-block basis. The proposed method requires a limited number of computations and compares well with other adaptive RBF filters.
We present a new decomposition algorithm for training bound-constrained Support Vector Machines. When selecting indices for the working set, only first-order derivative information from the objective function of the optimization model is required. Therefore, the resulting working-set selection strategy is simple and can be implemented easily. The new algorithm is proved to be globally convergent...
This contribution extends linear classifiers to sublinear classifiers for graphs and analyzes their properties. The results are (i) a geometric interpretation of sublinear classifiers, (ii) a generic learning rule based on the principle of empirical risk minimization, (iii) a convergence theorem for the margin perceptron in the separable case, and (iv) the VC-dimension of sublinear functions. Empirical...
Dual decomposition methods are the current state-of-the-art for training multiclass formulations of Support Vector Machines (SVMs). At every iteration, dual decomposition methods update a small subset of dual variables by solving a restricted optimization problem. In this paper, we propose an exact and efficient method for solving the restricted problem. In our method, the restricted problem is reduced...
We propose a gossip-based mini-batch random projection (GMRP) algorithm that can reduce communication overhead for a distributed optimization problem defined over a network with a very large number of constraints. We state a convergence result and provide an application of the GMRP, text classification with support vector machines.
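For context, a minimal single-machine SGD trainer for a linear SVM, in the style of Pegasos, looks as follows; distributed and gossip-based schemes like the one described split this kind of update across a network. The sketch below is a generic baseline, not the GMRP algorithm itself.

```python
import numpy as np

def pegasos_svm(X, y, lam=0.01, epochs=20, seed=0):
    """Pegasos-style SGD for a linear SVM with labels in {-1, +1}:
    at step t, pick a random example, use step size 1/(lam * t), and
    take a subgradient step on the hinge loss plus L2 penalty."""
    rng = np.random.default_rng(seed)
    w, t = np.zeros(X.shape[1]), 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w) < 1:          # margin violated
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                              # only shrink
                w = (1 - eta * lam) * w
    return w
```

On linearly separable data the returned weight vector classifies all training points correctly via `np.sign(X @ w)`.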
Elastic Net Regularizers have shown much promise in designing sparse classifiers for linear classification. In this work, we propose an alternating optimization approach to solve the dual problems of elastic net regularized linear classification Support Vector Machines (SVMs) and logistic regression (LR). One of the sub-problems turns out to be a simple projection. The other sub-problem can be solved...
This paper presents an iterative classification algorithm called Ridge-adjusted Slack Variable Optimization (RiSVO). RiSVO is an iterative procedure with two steps: (1) A working subset of the training data is selected so as to reject "extreme" patterns. (2) The decision vector and threshold value are obtained by minimizing the energy function associated with the slack variables. From a computational...
Considering that Particle Swarm Optimization (PSO) easily becomes trapped in local extrema, an improved PSO (IPSO) is proposed in this paper. In the new algorithm, we apply the evolution speed factor as the trigger condition to stochastically disturb the local optimal solution. The IPSO algorithm can not only markedly improve the convergence speed of the evolutionary optimization, but also...
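A minimal PSO sketch with a stagnation-triggered random disturbance follows. The stall counter is a crude stand-in for the evolution-speed-factor trigger mentioned above, not the paper's exact mechanism.

```python
import numpy as np

def ipso_minimize(f, dim=2, n=20, iters=200, seed=0, stall=15,
                  w=0.7, c1=1.5, c2=1.5):
    """Basic PSO; if the global best has not improved for `stall`
    iterations, perturb it randomly to help escape local optima."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pval)].copy()
    gval, since = pval.min(), 0
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        i = np.argmin(pval)
        if pval[i] < gval:
            g, gval, since = pbest[i].copy(), pval[i], 0
        else:
            since += 1
            if since >= stall:          # stagnation: random disturbance
                cand = g + rng.normal(0, 0.5, dim)
                fc = f(cand)
                if fc < gval:
                    g, gval = cand, fc
                since = 0
    return g, gval
```

On a smooth test function such as the sphere `f(x) = sum(x**2)`, the swarm drives the best value close to zero within a few hundred iterations.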
Given n nominal samples, a query point η, and a significance level α, the uniformly most powerful test for anomaly detection is to test p(η) ≤ α, where p(η) is the p-value function of η. In [1], a p-value estimator based on ranking a statistic over all data samples is proposed and shown to be asymptotically consistent. Relying on this framework, we propose a new statistic for p-value...
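The rank-based p-value idea can be sketched as follows, using the mean distance to the k nearest neighbors as the anomaly statistic. That choice of statistic is a common generic one for illustration; the paper above proposes its own.

```python
import numpy as np

def knn_score(x, data, k=3):
    """Anomaly statistic: mean distance from x to its k nearest samples."""
    d = np.sort(np.linalg.norm(data - x, axis=1))
    return d[:k].mean()

def empirical_p_value(query, data, k=3):
    """Rank-based p-value estimate: the fraction of nominal samples whose
    statistic (computed leave-one-out) is at least as extreme as the
    query's, with a +1 correction so the estimate is never exactly zero."""
    n = len(data)
    s_q = knn_score(query, data, k)
    s = np.array([knn_score(data[i], np.delete(data, i, axis=0), k)
                  for i in range(n)])
    return (1 + np.sum(s >= s_q)) / (n + 1)

# Declare an anomaly at level alpha when empirical_p_value(...) <= alpha.
```

A query far from the nominal cloud receives the smallest possible estimate, 1/(n+1), while a query in the bulk of the data receives a much larger p-value.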