Standard support vector machine (SVM) algorithms are not suitable for classification of large data sets because of their high training complexity. This paper introduces a novel SVM classification approach for large data sets that proceeds in two phases. In the first phase, an approximate classification is obtained by an SVM trained on data selected from the original data set by fast clustering techniques. In the...
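The first phase described above — reducing a large training set by clustering before fitting the SVM — can be sketched in plain Python. This is an illustrative assumption about the reduction step (cluster each class and keep only the centroids), not the paper's exact procedure; the function name and parameters are hypothetical:

```python
import random

def select_training_subset(points, labels, k, seed=0):
    """Toy k-means-style reduction: cluster each class separately and keep
    only the cluster centroids as the reduced SVM training set.
    Hypothetical helper; the paper's clustering step may differ."""
    random.seed(seed)
    reduced_X, reduced_y = [], []
    for cls in sorted(set(labels)):
        cls_pts = [p for p, y in zip(points, labels) if y == cls]
        centroids = random.sample(cls_pts, min(k, len(cls_pts)))
        for _ in range(10):  # a few Lloyd iterations per class
            buckets = [[] for _ in centroids]
            for p in cls_pts:
                j = min(range(len(centroids)),
                        key=lambda i: sum((a - b) ** 2
                                          for a, b in zip(p, centroids[i])))
                buckets[j].append(p)
            # recompute each centroid as its bucket mean (keep old if empty)
            centroids = [
                tuple(sum(c) / len(b) for c in zip(*b)) if b else centroids[i]
                for i, b in enumerate(buckets)
            ]
        reduced_X.extend(centroids)
        reduced_y.extend([cls] * len(centroids))
    return reduced_X, reduced_y
```

The reduced set (here, k centroids per class) would then be handed to an ordinary SVM trainer, cutting training complexity from the full data size down to the number of representatives.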
When a ship is damaged after a weapon attack, commanders need to recognise its unsinkability grade quickly. Through unsinkability classification, we can determine whether the ship will sink and estimate its sinking probability. Unsinkability classification is an N-class pattern recognition problem. The fuzzy support vector machine (FSVM) is used to distinguish a certain unsinkability grade...
The support vector machine (SVM) treats all data points as equally important in classification problems, so SVM is very sensitive to noisy data and outliers. The current fuzzy approach to the two-class SVM assigns a fuzzy membership to each data point in order to reduce the influence of less important data; however, computing the fuzzy memberships remains a challenge. It has been found that the...
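One common heuristic for the fuzzy memberships mentioned above is to weight each point by its distance to the class centre, so that likely outliers get small weight. A minimal sketch of that scheme (illustrative only, not necessarily the membership function the paper proposes):

```python
def fuzzy_memberships(points, delta=1e-3):
    """Distance-to-class-centre membership: points near the centre get a
    weight close to 1, far points (likely outliers) a weight close to 0.
    s_i = 1 - d_i / (r + delta), with r the largest distance in the class."""
    n = len(points)
    dim = len(points[0])
    centre = [sum(p[d] for p in points) / n for d in range(dim)]
    dists = [sum((p[d] - centre[d]) ** 2 for d in range(dim)) ** 0.5
             for p in points]
    r = max(dists)
    return [1.0 - d / (r + delta) for d in dists]
```

In an FSVM, these memberships scale each point's misclassification penalty, so a mislabelled outlier contributes little to the learned decision boundary.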
In this paper, we propose a cooperative learning algorithm for multi-category classification that is decomposed into two sub-optimization problems using the support vector machine technique. The proposed cooperative learning algorithm consists of two single learning algorithms, and each sub-optimization problem is solved by one of them. Unlike the cooperative neural network, the proposed cooperative...
We present a new method for the incremental training of multiclass Support Vector Machines that provides computational efficiency for training problems in the case where the training data collection is sequentially enriched and dynamic adaptation of the classifier is required. An auxiliary function that incorporates some desired characteristics in order to provide an upper bound of the objective function...
The standard 2-norm support vector machine (SVM for short) is known for its good performance in classification and regression problems. In this paper, the 1-norm support vector machine is considered, and a novel smoothing-function method for Support Vector Classification (SVC) and Regression (SVR) is proposed in an attempt to overcome some drawbacks of former methods, which are complex, subtle,...
The problem of classification on highly imbalanced datasets has been studied extensively in the literature. Most classifiers show significant deterioration in performance when dealing with skewed datasets. In this paper, we first examine the underlying reasons for SVM's deterioration on imbalanced datasets. We then propose two modifications for the soft margin SVM, where we change or add constraints...
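A standard remedy of the kind this abstract alludes to is cost-sensitive weighting of the soft-margin penalty: give the minority class a larger misclassification cost C, inversely proportional to its frequency. The sketch below shows that balanced weighting; the paper's actual constraint modifications may differ:

```python
from collections import Counter

def class_penalties(labels, base_c=1.0):
    """Cost-sensitive soft-margin weights for skewed data:
    C_cls = base_c * n / (k * n_cls), so rarer classes pay a
    proportionally larger penalty for each margin violation."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: base_c * n / (k * cnt) for cls, cnt in counts.items()}
```

With a 90/10 split, the minority class's penalty comes out 9 times larger than the majority's, pushing the separating hyperplane back toward the majority side that plain SVM training tends to favour.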
The sparse representation-based classifier (SRC) has been developed and shows great potential for pattern classification. This paper aims to gain a discriminative projection such that SRC achieves the optimum performance in the projected pattern space. We use the decision rule of SRC to steer the design of a dimensionality reduction method, which is coined the sparse representation classifier steered...
Extracting acronyms and their expansions from plain text is an important problem in text mining. Previous research shows that the problem can be solved via machine learning approaches, i.e., by converting acronym extraction into a binary classification problem. We investigate this classification problem and find that the classes are highly unbalanced (the positive instances are very rare compared...
The support vector machine (SVM) algorithm has shown good learning and generalization ability in classification, regression, and forecasting. This paper mainly analyzes the performance of the SVM algorithm on classification problems, including kernel function selection, parameter optimization, integration with other algorithms, and handling of multi-classification...
Nearest neighbor is one of the most successfully used techniques for performing classification and pattern recognition tasks. Its simplicity and effectiveness justify the use of this technique in certain domains; however, it presents several drawbacks regarding response time, noise sensitivity, and storage requirements. Several solutions have been proposed in order to alleviate these problems,...
Error-Correcting Output Codes (ECOC) provide a common way to model multi-class classification problems. With this state-of-the-art technique, a multi-class problem is decomposed into several binary ones. Additionally, within the ECOC framework we can apply the subclass technique (sub-ECOC), in which splitting the initial classes of the problem creates larger but easier-to-solve ECOC configurations...
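The ECOC decomposition described above can be made concrete in a few lines: each class is assigned a binary codeword, one binary classifier is trained per codeword bit, and at test time the predicted bits are decoded to the class with the nearest codeword. A minimal sketch of the decoding step (the code matrix here is a toy example):

```python
def ecoc_decode(bit_predictions, code_matrix):
    """Minimal ECOC decoding: each row of code_matrix is a class codeword;
    bit_predictions holds the binary classifiers' outputs. Return the index
    of the class whose codeword is nearest in Hamming distance."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(range(len(code_matrix)),
               key=lambda c: hamming(bit_predictions, code_matrix[c]))
```

Because decoding tolerates some bit errors (up to half the minimum Hamming distance between codewords), a few misfiring binary classifiers need not corrupt the final multi-class decision — this is the error-correcting property the name refers to.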
A multi-kernel Support Vector Machine model, called Hierarchical Support Vector Regression (HSVR), is proposed here. It is a self-organizing (growing) multiscale version of the Support Vector Regression (SVR) model, constituted of hierarchical layers, each containing a standard SVR with a Gaussian kernel at decreasing scales. HSVR has been applied to a noisy synthetic dataset. The results...
Data mining is an important process, with applications found in many business, science and industrial problems. While a wide variety of algorithms have already been proposed in the literature for classification tasks in large data sets, and the majority of them have been proven to be very effective, not all of them are flexible and easily extensible. In this paper, we introduce two new approaches...
A new text classification algorithm based on the Ant Colony Algorithm is proposed in this paper. It exploits ACO's strength in solving discrete problems together with the discreteness of text documents' features. Texts are classified by class-population ants, which carry class information and crawl to find an optimal matching path during the algorithm's iterations. It can achieve a satisfactory...
In a sparse-representation-based face recognition scheme, the desired dictionary should have good representational power (i.e., being able to span the subspace of all faces) while supporting optimal discrimination of the classes (i.e., different human subjects). We propose a method to learn an over-complete dictionary that attempts to simultaneously achieve the above two goals. The proposed method,...
Supervised learning uses a training set of labeled examples to compute a classifier which is a mapping from feature vectors to class labels. The success of a learning algorithm is evaluated by its ability to generalize, i.e., to extend this mapping accurately to new data that is commonly referred to as the test data. Good generalization depends crucially on the quality of the training set. Because...
This paper presents a minimum classification error (MCE) training approach for improving the accuracy of multi-class support vector machine (SVM) classifiers. We have applied this approach to topic identification (topic ID) for human-human telephone conversations from the Fisher corpus using ASR lattice output. The new approach yields improved performance over the traditional techniques for training...
Minimum Classification Error (MCE) training, which can be used to achieve minimum error classification of various types of patterns, has attracted a great deal of attention. However, to increase classification robustness, a conventional MCE framework has no practical optimization procedures like geometric margin maximization in Support Vector Machine (SVM). To realize high robustness in a wide range...
The traditional k-NN classifier has many limitations: it does not take into account each class's distribution, the importance of each feature, the contribution of each neighbor, or the number of instances per class. A differential evolution (DE) optimization technique is utilized to enhance the performance of k-NN by optimizing the metric weights of features, neighbors, and classes. Several...
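The quantity the DE search tunes above is a set of weights inside the k-NN distance metric and vote. A minimal sketch of the feature-weighted part (function name and voting scheme are illustrative; the paper also weights neighbors and classes):

```python
def weighted_knn_predict(query, X, y, weights, k=3):
    """k-NN with per-feature weights in the squared distance — the kind of
    metric weights a DE search would optimize. Plain majority vote."""
    def wdist(p):
        return sum(w * (a - b) ** 2 for w, a, b in zip(weights, p, query))
    nearest = sorted(range(len(X)), key=lambda i: wdist(X[i]))[:k]
    votes = {}
    for i in nearest:
        votes[y[i]] = votes.get(y[i], 0) + 1
    return max(votes, key=votes.get)
```

A DE optimizer would evaluate candidate weight vectors by cross-validated accuracy of this predictor and evolve the population toward weights that suppress irrelevant features.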