In this paper, we have proposed two novel optimization methods for discriminative training (DT) of hidden Markov models (HMMs) in speech recognition based on an efficient global optimization algorithm used to solve the so-called trust region (TR) problem, where a quadratic function is minimized under a spherical constraint. In the first method, maximum mutual information estimation (MMIE) of Gaussian...
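The trust region (TR) subproblem mentioned above, minimizing a quadratic under a spherical constraint, can be sketched as follows. This is an illustrative routine of our own, not the paper's algorithm: it solves min 0.5 xᵀAx + bᵀx subject to ‖x‖ ≤ r by bisecting on the Lagrange multiplier, and it ignores the so-called hard case for brevity.

```python
import numpy as np

def trust_region_min(A, b, radius, tol=1e-8):
    """Minimize 0.5 x^T A x + b^T x subject to ||x|| <= radius.

    If A is positive definite and the unconstrained minimizer lies
    inside the ball, return it directly.  Otherwise bisect on the
    multiplier lam so that ||(A + lam I)^{-1} (-b)|| = radius.
    (Illustrative sketch; the degenerate "hard case" is not handled.)
    """
    n = len(b)
    eigmin = np.linalg.eigvalsh(A)[0]   # smallest eigenvalue
    if eigmin > 0:
        x = np.linalg.solve(A, -b)
        if np.linalg.norm(x) <= radius:
            return x                    # interior solution
    # Boundary solution: bisect lam over (max(0, -eigmin), hi]
    lo = max(0.0, -eigmin) + 1e-12
    hi = lo + 1.0
    while np.linalg.norm(np.linalg.solve(A + hi * np.eye(n), -b)) > radius:
        hi *= 2.0                       # grow until the step fits the ball
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        x = np.linalg.solve(A + mid * np.eye(n), -b)
        if np.linalg.norm(x) > radius:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return np.linalg.solve(A + hi * np.eye(n), -b)
```

For example, with A = 2I, b = (-4, 0) and radius 1, the unconstrained minimizer (2, 0) lies outside the ball, so the routine returns the boundary point (1, 0).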
In this paper, we have proposed a new method to construct an auxiliary function for the discriminative training of HMMs in speech recognition. The new auxiliary function serves as a first-order approximation of the original objective function but, more importantly, it also remains a lower bound of the original objective function. Furthermore, the trust region (TR) method in [1] is applied to find...
In this paper, we study a new semidefinite programming (SDP) formulation to improve optimization efficiency for large margin estimation (LME) of HMMs in speech recognition. We re-formulate the same LME problem as smaller-scale SDP problems to speed up the SDP-based LME training, especially for large model sets. In the new formulation, instead of building the SDP problem from a single huge variable...
Recently, we proposed a novel optimization algorithm called constrained line search (CLS) to train Gaussian mean vectors of HMMs in the MMI sense. In this paper, we extend and re-formulate it in a more general framework. The new CLS can optimize any discriminative objective function, including MMI, MCE, and MPE/MWE. Also, closed-form solutions to update all Gaussian mixture parameters, including means,...
In this paper, we study how to incorporate training errors in large margin estimation (LME) under the semi-definite programming (SDP) framework. As in soft-margin SVMs, we propose to optimize a new objective function that linearly combines the minimum margin among positive tokens and an average error function over all negative tokens. The new method is named soft-LME. It is shown that the new soft-LME problem...
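The soft-LME objective described above can be sketched as follows. The function below is our own minimal illustration under the assumption that each training token has a scalar separation margin (positive for correctly classified tokens, non-positive for errors); the name `soft_lme_objective` and the trade-off weight `rho` are ours, not the paper's.

```python
import numpy as np

def soft_lme_objective(margins, rho=0.5):
    """Soft-LME style objective: minimum margin among positive tokens,
    linearly combined with the average error of negative tokens.

    margins : per-token separation margins (> 0 means correctly
              classified with some margin, <= 0 means an error).
    rho     : weight trading off margin maximization vs. error penalty.
    """
    margins = np.asarray(margins, dtype=float)
    pos = margins[margins > 0]
    neg = margins[margins <= 0]
    min_margin = pos.min() if pos.size else 0.0   # smallest positive margin
    avg_error = (-neg).mean() if neg.size else 0.0  # mean magnitude of errors
    return min_margin - rho * avg_error
```

Maximizing this quantity pushes the worst correctly-classified token away from the decision boundary while softly penalizing misclassified tokens, analogous to the slack term in a soft-margin SVM.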
In this paper, we propose a novel constrained line search to optimize the MMI objective function for training discriminative HMMs. In our method, MMI estimation is cast as a constrained maximization problem, where the Kullback-Leibler divergence between the models before and after parameter adjustment is introduced as a constraint during optimization. Then, based on the idea of line search, we show...
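For Gaussian observation densities, the KL-divergence constraint above has a closed form. As a sketch (assuming diagonal covariances; the function name is ours), the divergence between the model before and after a parameter update could be computed like this:

```python
import numpy as np

def kl_diag_gauss(mu1, var1, mu2, var2):
    """KL(N1 || N2) between two diagonal-covariance Gaussians.

    Closed form: 0.5 * sum( log(var2/var1)
                            + (var1 + (mu1 - mu2)^2) / var2 - 1 ).
    """
    mu1, var1 = np.asarray(mu1, float), np.asarray(var1, float)
    mu2, var2 = np.asarray(mu2, float), np.asarray(var2, float)
    return 0.5 * np.sum(np.log(var2 / var1)
                        + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)
```

In a constrained line search of the kind the abstract describes, a candidate step along the search direction would be accepted only while such a divergence between the old and updated Gaussians stays below a chosen threshold.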
We propose to use minimum divergence (MD), where acoustic similarity between HMMs is characterized by Kullback-Leibler divergence (KLD), for discriminative training. The MD objective function is defined as a posterior-weighted divergence measured over the whole training set. Different from our earlier work, where KLD-based acoustic similarity is pre-computed for all initial models and stays invariant in the...
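A posterior-weighted divergence of this form can be sketched in a few lines. This is our own illustration under the assumption that per-token KLD-based similarities and the corresponding hypothesis posteriors have already been computed; the function name is hypothetical.

```python
import numpy as np

def md_objective(posteriors, divergences):
    """Posterior-weighted divergence over a set of tokens.

    posteriors  : hypothesis posterior probabilities, one per token.
    divergences : KLD-based acoustic similarity for each token
                  (assumed precomputed from the current models).
    """
    posteriors = np.asarray(posteriors, dtype=float)
    divergences = np.asarray(divergences, dtype=float)
    return float(np.sum(posteriors * divergences))
```

Unlike a setting where the divergences are pre-computed once from the initial models, the abstract indicates the MD criterion re-evaluates them as the models change during training.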