"Grandmother cell" is a term in neuroscience for the simplistic notion that the brain has a separate neuron to represent every familiar face, with the important properties of sparseness and invariance. This paper proposes a linear-regression-based classification model for face recognition, which learns a mapping from the training feature vectors to grandmother-cell-like codes, with one unit corresponding...
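The abstract is truncated, but the core idea it describes, regressing feature vectors onto one-hot "grandmother-cell" codes and classifying by the maximally activated unit, can be sketched as follows. This is a toy ridge-regularized version; the function names and the regularization constant are illustrative, not from the paper:

```python
import numpy as np

def fit_grandmother(X, y, n_classes, lam=1e-3):
    """Learn a linear map from features to one-hot 'grandmother-cell' codes."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
    Y = np.eye(n_classes)[y]                    # one output unit per class
    # Ridge-regularized least squares: W = (Xb^T Xb + lam*I)^-1 Xb^T Y
    W = np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ Y)
    return W

def predict_grandmother(W, X):
    """Classify by the maximally activated output unit."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.argmax(Xb @ W, axis=1)
```

Each column of `W` plays the role of one "grandmother cell": it responds most strongly to samples of its own class.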
We propose a new criterion for discriminative dimension reduction, Max-K-Min Distance Analysis (MKMDA). Given a data set with C classes, MKMDA maximizes the sum of the K minimum pairwise distances of these C classes on the selected one-dimensional subspace. The set of possible one-dimensional subspaces for which the order of the projected class centroids is identical defines a convex region with...
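The MKMDA objective as described, the sum of the K smallest pairwise distances between class centroids projected onto a one-dimensional subspace, might be computed as below. This is an illustrative sketch of the criterion only; the paper's actual optimization over the convex ordered-centroid regions is not reproduced here:

```python
import numpy as np
from itertools import combinations

def mkmda_objective(w, centroids, K):
    """Sum of the K minimum pairwise distances of the class centroids
    after projection onto the unit direction w (the 1-D subspace)."""
    w = w / np.linalg.norm(w)
    proj = centroids @ w
    dists = sorted(abs(proj[i] - proj[j])
                   for i, j in combinations(range(len(proj)), 2))
    return sum(dists[:K])
```

Maximizing this objective over `w` favors directions on which even the closest pairs of classes stay separated, unlike criteria that only reward average between-class scatter.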
Sparse Representation-Based Classification (SRC) is a face recognition breakthrough of recent years which has successfully addressed the recognition problem given sufficient training images of each gallery subject. In this paper, we extend SRC to applications where there are very few, or even a single, training image per subject. Assuming that the intraclass variations of one subject can be approximated...
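For context, the basic SRC scheme that this paper extends codes a test sample sparsely over the pooled training images and assigns it to the class with the smallest reconstruction residual. A toy sketch, using greedy orthogonal matching pursuit in place of the usual l1 solver; all names are illustrative:

```python
import numpy as np

def omp(D, y, k):
    """Greedy sparse coding of y over dictionary D (columns = atoms)."""
    residual, idx, x = y.copy(), [], np.zeros(0)
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))
        x, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ x
    coef = np.zeros(D.shape[1])
    coef[idx] = x
    return coef

def src_classify(D, labels, y, k=3):
    """Assign y to the class whose atoms best reconstruct it."""
    coef = omp(D, y, k)
    best, best_res = None, np.inf
    for c in np.unique(labels):
        x_c = np.where(labels == c, coef, 0.0)  # keep only class-c coefficients
        res = np.linalg.norm(y - D @ x_c)
        if res < best_res:
            best, best_res = c, res
    return best
```

The extension the abstract begins to describe replaces the missing per-subject training images with a shared intraclass-variation dictionary, but the truncated text does not give enough detail to sketch that part.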
Active Learning (AL) is designed to aid the labor-intensive process of training acoustic models for speech recognition. In AL, only the most informative training samples are selected for manual annotation; thus, how to evaluate the unlabeled samples is worth studying. In this paper, we propose a unified framework to generate confusion networks at multiple levels, including character, syllable and...
In this paper, we propose a learning method for fast training of support vector machines (SVMs). First, we divide the two-class training samples into two sets according to their labels. Second, each one-class set is trained with a one-class SVM (OCSVM), yielding two sets of support vectors (SVs). Finally, the two sets of SVs are combined into a set of two-class training samples...
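The three steps might look like the following sketch using scikit-learn; the function name and the hyperparameters (`nu`, the RBF kernel) are illustrative assumptions, since the truncated abstract does not state the paper's choices:

```python
import numpy as np
from sklearn.svm import SVC, OneClassSVM

def fast_svm_train(X, y, nu=0.5):
    # Step 1: divide the two-class samples into two sets by label
    X_pos, X_neg = X[y == 1], X[y == -1]
    # Step 2: train a one-class SVM on each set; keep only its support vectors
    sv_pos = X_pos[OneClassSVM(nu=nu, gamma="scale").fit(X_pos).support_]
    sv_neg = X_neg[OneClassSVM(nu=nu, gamma="scale").fit(X_neg).support_]
    # Step 3: combine the two SV sets into a reduced two-class training set
    X_red = np.vstack([sv_pos, sv_neg])
    y_red = np.concatenate([np.ones(len(sv_pos)), -np.ones(len(sv_neg))])
    return SVC(kernel="rbf", gamma="scale").fit(X_red, y_red)
```

The final two-class SVM sees only the boundary-describing SVs from each class rather than all samples, which is where the training speed-up comes from.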
In this paper, we address a new challenge: the filter is expected to converge much faster, e.g. within 10 labeled SMS messages or fewer. Topic-model-based dimension reduction can minimize the structural risk with limited training data, but dimension reduction works against the completeness of the feature space. It is very difficult to achieve both a fast convergence rate and completeness at the same time only by...
In this paper, we present a large-scale offline handwritten Chinese character database, HCL2000, which will be made publicly available to the research community. The database contains 3,755 frequently used simplified Chinese characters written by 1,000 different subjects. The writers' information is incorporated in the database to facilitate testing on groups of writers with different backgrounds such...
In this paper, we propose a new method to model the manifold of handwritten Chinese characters using the Local Discriminant Projection. We use a cascade framework that combines global similarity with local discriminative cues to recognize Chinese characters: we first measure the similarity of different characters using a nearest-neighbor (NN) classifier, followed by the Local Discriminant Projection...
A new method based on model selection for acoustic model training is proposed. The MPE-trained model and the MLE-trained model are used for model selection in the following training. The selection criterion is based on the ratio of the inter-variance to the intra-variance of each model. In addition, we propose a clustering method for the models in order to obtain the accuracy information for the weight calculation...
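The selection criterion described, the ratio of inter-class variance to intra-class variance of each candidate model, might be sketched as below, assuming each model is summarized by per-class Gaussian means and variances; all names are illustrative:

```python
import numpy as np

def variance_ratio(class_means, class_vars):
    """Inter-variance (spread of the class means) over intra-variance
    (average within-class variance), averaged across feature dimensions."""
    inter = np.var(np.asarray(class_means), axis=0).mean()
    intra = np.mean(class_vars)
    return inter / intra

def select_model(models):
    """Pick the candidate (e.g. MPE- vs MLE-trained) with the largest ratio."""
    return max(models, key=lambda name: variance_ratio(*models[name]))
```

A larger ratio indicates classes that are better separated relative to their internal spread, which is why it serves as the selection criterion between the two training schemes.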