In this paper we propose a new algorithm for neural network training, developed as a modification of the Levenberg-Marquardt algorithm for MLP learning. The proposed algorithm converges well and reduces the amount of oscillation in the learning procedure. We name this algorithm the GK-LM method. An example is given to show the usefulness of this method. Finally...
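The abstract does not specify the GK-LM modification itself, so the sketch below shows only the baseline Levenberg-Marquardt update it builds on: the damped normal-equation step delta = -(J^T J + mu*I)^(-1) J^T r, with mu adapted by whether the step reduced the loss. All names here are illustrative, not the paper's.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, w, mu=1e-2, iters=50):
    """Baseline LM loop: damped Gauss-Newton steps with adaptive damping mu."""
    for _ in range(iters):
        r = residual(w)
        J = jacobian(w)
        # mu interpolates between gradient descent (large mu)
        # and Gauss-Newton (small mu).
        delta = np.linalg.solve(J.T @ J + mu * np.eye(len(w)), -J.T @ r)
        if np.sum(residual(w + delta) ** 2) < np.sum(r ** 2):
            w = w + delta      # accept the step, trust the model more
            mu *= 0.5
        else:
            mu *= 2.0          # reject the step, lean toward gradient descent
    return w

# Minimal check: fit y = a*x + b (a toy stand-in for network training).
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + 1.0
residual = lambda w: w[0] * x + w[1] - y
jacobian = lambda w: np.column_stack([x, np.ones_like(x)])
w = levenberg_marquardt(residual, jacobian, np.zeros(2))
```

On this linear problem the damped step is almost exact, so w converges to (2, 1) within a few accepted iterations.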
This paper investigates the performance of conjugate gradient algorithms with a sliding-window approach for training multilayer perceptrons (MLPs). Online learning is used when the system under investigation is time-varying or when it is not convenient to obtain a full history of offline data about the system variables. A sliding-window framework is proposed to combine the robustness of offline learning...
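The sliding-window idea described above can be sketched in a few lines: keep only the most recent W samples in a buffer and refit on that buffer at every step, so the model tracks a time-varying system. The paper's conjugate-gradient optimizer is not detailed in the abstract; plain gradient descent stands in for it here, and the one-weight model is a toy assumption.

```python
from collections import deque

class SlidingWindowTrainer:
    def __init__(self, window=5):
        # deque(maxlen=...) drops the oldest sample automatically,
        # which is exactly the sliding-window behaviour.
        self.buffer = deque(maxlen=window)
        self.w = 0.0  # single weight of the toy model y = w * x

    def step(self, x, y, lr=0.1, epochs=20):
        self.buffer.append((x, y))
        # Refit on the current window only; the paper uses conjugate
        # gradient at this point, replaced by gradient descent here.
        for _ in range(epochs):
            grad = sum(2.0 * (self.w * xi - yi) * xi
                       for xi, yi in self.buffer) / len(self.buffer)
            self.w -= lr * grad
        return self.w

trainer = SlidingWindowTrainer(window=5)
for i in range(1, 20):
    x = i / 20.0
    trainer.step(x, 3.0 * x)  # the system currently obeys y = 3x
```

Because only the latest five samples are ever refit, the weight follows the current regime (w tends to 3 here) and would re-adapt if the underlying relation drifted.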
The multilayer feed-forward neural network is widely used and is trained by minimizing an error function. Back-propagation is a well-known training method for multilayer networks, but it often suffers from local minima and slow convergence. These problems arise from the gradient behavior of the commonly used sigmoid activation function (SAF): the weight update becomes zero when the activation...
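The saturation effect described above is easy to demonstrate numerically: the sigmoid's derivative s'(x) = s(x)(1 - s(x)) peaks at 0.25 near zero and collapses for large |x|, so a saturated neuron passes back almost no gradient and its weight update is effectively zero. This is a generic illustration, not the paper's specific remedy.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    # Derivative of the sigmoid, s'(x) = s(x) * (1 - s(x)).
    s = sigmoid(x)
    return s * (1.0 - s)

# Near the origin the gradient is at its maximum of 0.25 ...
g_active = sigmoid_grad(0.0)
# ... but a saturated neuron passes back almost nothing.
g_saturated = sigmoid_grad(10.0)
```

With a net input of 10 the gradient is about 4.5e-5, some four orders of magnitude below the peak, which is why learning stalls once hidden units saturate.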
The back-propagation (BP) network is widely recognized as a powerful training tool for multilayer neural networks (MLNNs). It usually suffers from a slow convergence rate and often ends in local minima, since it applies the steepest-descent method to update the network weights. A variety of related algorithms have been introduced to address this problem. The Levenberg-Marquardt algorithm is one...
This paper presents the optimization of a one-hidden-layer artificial neural network (ANN) design using evolutionary programming (EP) for predicting the energy output of a grid-connected photovoltaic system installed at the Malaysian Energy Centre (PTM), Bangi, Malaysia. In this study, the architecture and training parameters of the multilayer feed-forward back-propagation ANN model were optimized while...
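Evolutionary programming, as used above, evolves candidate designs by mutation alone (no crossover) and keeps the fittest survivors each generation. The PTM photovoltaic data and the ANN fitness function are not available from the abstract, so a toy fitness (minimize (x - 7)^2) stands in for the ANN-design objective in this sketch; the selection scheme is a simple elitist variant, assumed for illustration.

```python
import random

def evolve(fitness, pop_size=20, generations=100, sigma=0.5, seed=0):
    """Mutation-only evolutionary loop: each parent yields one offspring."""
    rng = random.Random(seed)
    pop = [rng.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        # EP: every parent produces one Gaussian-mutated offspring.
        offspring = [x + rng.gauss(0.0, sigma) for x in pop]
        # Keep the best pop_size individuals from parents + offspring.
        pop = sorted(pop + offspring, key=fitness)[:pop_size]
    return pop[0]

# Toy stand-in for "find the ANN design with the lowest prediction error".
best = evolve(lambda x: (x - 7.0) ** 2)
```

In the paper's setting, each individual would encode an ANN configuration (hidden units, learning rate, momentum) and the fitness would be the trained network's prediction error on the PV data.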
Saturation conditions of the hidden-layer neurons are a major cause of learning retardation in multilayer perceptrons (MLPs). Under such conditions the traditional backpropagation (BP) algorithm is trapped in local minima. To renew the search for a global minimum, we need to detect these traps and apply an offset scheme to escape them. We have discovered that the gradient norm drops to a very low value in local...
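The detection side of the idea above can be sketched directly: monitor the gradient norm during descent and flag the iteration at which it collapses below a small threshold, signalling a stall at a (local) minimum or saturation plateau. The paper's offset scheme for escaping the trap is not given in the abstract and is omitted here; the 1-D loss and threshold are illustrative assumptions.

```python
def gd_with_trap_detection(grad, w, lr=0.01, tol=1e-3, iters=500):
    """Gradient descent that reports when the gradient norm stalls."""
    for t in range(iters):
        g = grad(w)
        if abs(g) < tol:
            return w, t   # trapped: gradient norm has collapsed
        w -= lr * g
    return w, None        # never stalled within the budget

# f(w) = (w^2 - 1)^2 has minima at w = +/-1; starting from w = 2 the
# descent settles into the minimum at w = 1 and the detector fires.
grad = lambda w: 4.0 * w * (w ** 2 - 1.0)
w, trapped_at = gd_with_trap_detection(grad, 2.0)
```

Once `trapped_at` fires, a method like the paper's would perturb (offset) the weights to push the search out of the stalled region and resume training.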