To address the unconstrained optimization problem, the Conjugate Gradient Method (CG) generates a sequence of iterates that approaches the minimum of the objective function. Because of rounding errors, however, many of CG's theoretical merits are lost in practical use. The rate of convergence is therefore not ideal, and a practical problem confronting us is how to improve the conjugate gradient iteration so...
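For reference, the classical linear CG iteration can be sketched in a few lines. In exact arithmetic it reaches the minimizer of a strictly convex quadratic in at most n steps; it is precisely this finite-termination property that rounding errors erode, as the abstract notes. This is a generic textbook sketch, not the improved variant the paper proposes; the function name and test values are illustrative assumptions.

```python
import numpy as np

def cg_solve(A, b, x0=None, tol=1e-10, max_iter=None):
    """Linear conjugate gradient: minimizes 0.5*x^T A x - b^T x,
    i.e. solves A x = b, for symmetric positive-definite A."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    r = b - A @ x              # residual = negative gradient
    d = r.copy()               # initial search direction
    for _ in range(max_iter or n * 10):
        Ad = A @ d
        alpha = (r @ r) / (d @ Ad)        # exact line search along d
        x = x + alpha * d
        r_new = r - alpha * Ad
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)  # conjugacy (Fletcher-Reeves) coefficient
        d = r_new + beta * d
        r = r_new
    return x
```

In floating point, the directions gradually lose conjugacy, which is why practical variants add restarts or modified beta formulas.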
The BP network training algorithm modifies weights by gradient descent on the error, which leads to the inevitable problem of becoming trapped in local minima. Some researchers have presented amendments and achieved remarkable results, but work that combines other algorithms to adjust the weights of a BP network remains scarce. At present, a relatively new evolutionary algorithm called differential evolution is used...
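To make the evolutionary component concrete, here is a minimal sketch of the standard DE/rand/1/bin differential evolution loop, applied to a generic objective rather than actual BP network weights. All names and parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Minimal DE/rand/1/bin sketch, minimizing f over box bounds."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)  # differential mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True            # force at least one gene over
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fitness[i]:                  # greedy one-to-one selection
                pop[i], fitness[i] = trial, f_trial
    best = np.argmin(fitness)
    return pop[best], fitness[best]
```

Because DE needs only objective values, not gradients, it can escape the local minima that trap pure error-gradient descent, at the cost of many more function evaluations.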
The intent of this study is to provide an initial exploration of the metamodeling capabilities of two methods, namely neural network (NN) and Kriging approximation, in the context of simulation optimization. A total of four performance measures are adopted; they describe different kinds of metamodel performance, such as the ability to provide good starting points for gradient-based search, accuracy...
Fast convergence, low computational complexity and good stability are important goals in research on neural network learning algorithms. A parallel lagged-start hybrid optimization algorithm is studied: it not only integrates the basic gradient method and an unconstrained optimization algorithm so that their advantages complement each other, but also makes full use...
A new fuzzy optimization neural network model based on the Levenberg-Marquardt (LM) algorithm is proposed to address the slow convergence of the traditional fuzzy optimization neural network model. In the new model, the gradient descent algorithm is replaced by the LM algorithm to minimize the output error during network training, which changes the weight-adjustment equations...
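Replacing gradient descent with LM amounts to solving a damped Gauss-Newton system at each weight update, w_new = w - (J^T J + mu*I)^{-1} J^T e, where J is the Jacobian of the residuals e with respect to the weights w. A minimal generic sketch on a toy linear fit (not the paper's fuzzy network; names and values are assumptions):

```python
import numpy as np

def lm_step(w, J, e, mu):
    """One Levenberg-Marquardt update: solve (J^T J + mu*I) dw = J^T e."""
    H = J.T @ J + mu * np.eye(len(w))   # damped Gauss-Newton Hessian approximation
    return w - np.linalg.solve(H, J.T @ e)

# Toy use: fit y = w0*x + w1 to noise-free data by repeated LM steps.
x = np.linspace(0.0, 1.0, 20)
y = 3.0 * x + 1.0
w = np.zeros(2)
mu = 1e-3                               # damping; large mu ~ gradient descent
for _ in range(20):
    e = (w[0] * x + w[1]) - y                        # residual vector
    J = np.stack([x, np.ones_like(x)], axis=1)       # Jacobian d e / d w
    w = lm_step(w, J, e, mu)
```

The damping term mu interpolates between gradient descent (large mu) and Gauss-Newton (small mu), which is why LM typically converges much faster than plain gradient descent near a minimum.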
This paper further investigates the sub-gradient projection neural network model for solving non-differentiable convex optimization problems proposed by Li et al. (2006). It is proved in this paper that when the initial points belong to the constraint set, or when the initial points do not belong to the constraint set and the objective function is strictly convex, the network trajectories converge...
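The network of Li et al. is a continuous-time dynamical system, but its discrete-time analogue, the projected subgradient iteration x_{k+1} = P_C(x_k - t_k * g_k) with g_k in the subdifferential of f at x_k, conveys the same idea of combining a subgradient step with a projection onto the constraint set. The toy problem and names below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def projected_subgradient(subgrad, project, f, x0, steps=2000):
    """Projected subgradient method with diminishing steps t_k = 1/(k+1).
    Tracks the best point seen, since f need not decrease monotonically."""
    x = project(np.asarray(x0, dtype=float))
    best_x, best_f = x.copy(), f(x)
    for k in range(steps):
        x = project(x - subgrad(x) / (k + 1.0))
        if f(x) < best_f:
            best_x, best_f = x.copy(), f(x)
    return best_x, best_f

# Toy problem: minimize the non-differentiable f(x) = |x1| + |x2|
# over the box C = [1, 2] x [-1, 1]; the minimizer is (1, 0) with value 1.
f = lambda x: float(np.abs(x).sum())
subgrad = lambda x: np.sign(x)                   # a valid subgradient of the l1 norm
project = lambda x: np.clip(x, [1, -1], [2, 1])  # Euclidean projection onto the box
x_star, f_star = projected_subgradient(subgrad, project, f, [2.0, 1.0])
```

Note that the objective is not differentiable along the axes, yet the iteration still converges, which is the point of working with subgradients rather than gradients.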