In this paper we propose a new algorithm for neural network training, developed by modifying the Levenberg-Marquardt algorithm for MLP neural network learning. The proposed algorithm has good convergence and reduces the amount of oscillation in the learning procedure. We name this algorithm the GK-LM method. An example is given to show the usefulness of this method. Finally...
This paper investigates the performance of conjugate gradient algorithms with a sliding-window approach for training multilayer perceptrons (MLPs). Online learning is employed when the system under investigation is time varying or when it is not practical to obtain a full history of offline data about the system variables. A sliding-window framework is proposed to combine the robustness of offline learning...
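The sliding-window idea can be illustrated with a minimal sketch: keep only the most recent samples in a fixed-size buffer and refit the model on that window as new data arrives. The class name, window size, and the use of a plain least-squares fit (standing in for a conjugate-gradient MLP update) are all illustrative assumptions, not the paper's actual implementation.

```python
from collections import deque
import numpy as np

class SlidingWindowTrainer:
    """Hypothetical sketch: track a time-varying system by refitting
    on only the most recent `window` samples."""

    def __init__(self, window=50):
        # deque with maxlen silently discards the oldest sample
        # once the window is full
        self.buf = deque(maxlen=window)

    def add_sample(self, x, y):
        self.buf.append((x, y))

    def fit_window(self):
        # Least-squares line fit on the current window; a real system
        # would run conjugate-gradient MLP training on the same data.
        X = np.array([x for x, _ in self.buf])
        y = np.array([t for _, t in self.buf])
        A = np.c_[X, np.ones(len(X))]          # [x, 1] design matrix
        w, *_ = np.linalg.lstsq(A, y, rcond=None)
        return w                                # [slope, intercept]
```

Because the buffer has a fixed `maxlen`, old samples age out automatically, which is what lets the model track drift in a time-varying system.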
The multilayer feed-forward neural network, trained by minimizing an error function, is widely used. Backpropagation is a well-known training method for multilayer networks, but it often suffers from local minima and slow convergence. These problems arise from the gradient behavior of the commonly used sigmoid activation function (SAF). The weight update becomes zero when the activation...
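The gradient behavior described above is easy to verify numerically: the sigmoid's derivative is s(1-s), which peaks at 0.25 and collapses toward zero as the pre-activation input grows in magnitude, so weight updates proportional to it effectively vanish. A minimal check:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    # d/dx sigmoid(x) = sigmoid(x) * (1 - sigmoid(x))
    s = sigmoid(x)
    return s * (1.0 - s)

# Gradient at the origin vs. deep in the saturated region:
# at x = 0 it is exactly 0.25; at x = 10 it is ~4.5e-5,
# so backprop updates through a saturated unit are nearly zero.
for x in (0.0, 5.0, 10.0):
    print(f"x = {x:5.1f}  sigmoid'(x) = {sigmoid_grad(x):.6f}")
```

This is why saturated hidden units stall learning: the error signal passed backward is multiplied by a derivative that is essentially zero.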
The back-propagation (BP) network is widely recognized as a powerful training tool for multilayer neural networks (MLNNs). It usually suffers from a slow convergence rate and often settles in local minima, since it applies the steepest-descent method to update the network weights. A variety of related algorithms have been introduced to address this problem. The Levenberg-Marquardt algorithm is one...
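The Levenberg-Marquardt update mentioned above blends gradient descent and the Gauss-Newton method: w ← w − (JᵀJ + μI)⁻¹Jᵀe, where J is the Jacobian of the residuals e and μ is the damping factor. The tiny demo below applies it to a linear least-squares problem purely for illustration; the data and the fixed μ are assumptions, not from the paper.

```python
import numpy as np

def lm_step(w, J, e, mu):
    """One Levenberg-Marquardt update:
    w <- w - (J^T J + mu*I)^{-1} J^T e."""
    H = J.T @ J + mu * np.eye(len(w))      # damped Gauss-Newton Hessian
    return w - np.linalg.solve(H, J.T @ e)

# Demo: fit y = a*x + b where the Jacobian of the residuals is
# simply the design matrix. Data generated with a = 2, b = 1.
X = np.c_[np.array([0.0, 1.0, 2.0, 3.0]), np.ones(4)]
y = np.array([1.0, 3.0, 5.0, 7.0])
w = np.zeros(2)
for _ in range(20):
    e = X @ w - y                          # current residuals
    w = lm_step(w, X, e, mu=1e-3)
```

For small μ the step approaches Gauss-Newton (fast near a solution); for large μ it approaches a short gradient-descent step (robust far from one), which is what gives LM its good convergence on MLP training.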
This work presents system identification using neural network approaches for modelling a laboratory-based twin-rotor multi-input multi-output system (TRMS). Here we focus on a memetic-algorithm-based approach for training the multilayer perceptron neural network (NN) applied to nonlinear system identification. In the proposed system identification scheme, we exploit three global search methods...
This paper presents the optimization of a one-hidden-layer artificial neural network (ANN) design using evolutionary programming (EP) for predicting the energy output of a grid-connected photovoltaic system installed at the Malaysian Energy Centre (PTM), Bangi, Malaysia. In this study, the architecture and training parameters of the multi-layer feedforward back-propagation ANN model were optimized while...
Saturation of the hidden-layer neurons is a major cause of learning retardation in multilayer perceptrons (MLPs). Under such conditions the traditional backpropagation (BP) algorithm becomes trapped in local minima. To resume the search for a global minimum, we need to detect these traps and an offset scheme to avoid them. We have found that the gradient norm drops to a very low value in local...
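The trap-detection idea described above can be sketched as a simple rule: if the gradient norm has collapsed while the training error is still large, the search is likely stuck, so perturb the weights to restart it. The thresholds, the random-offset scheme, and the function name here are illustrative assumptions, not the authors' actual detection and offset method.

```python
import numpy as np

def escape_if_trapped(weights, grad, error,
                      grad_tol=1e-6, err_tol=1e-2,
                      offset_scale=0.5, rng=None):
    """Hypothetical trap handler: a tiny gradient norm combined with a
    still-large error suggests a local minimum / saturation trap, so we
    apply a random offset to the weights. Returns (weights, trapped)."""
    if rng is None:
        rng = np.random.default_rng(0)
    if np.linalg.norm(grad) < grad_tol and error > err_tol:
        offset = offset_scale * rng.standard_normal(weights.shape)
        return weights + offset, True       # kick the search elsewhere
    return weights, False                   # keep training normally
```

The key observation being exploited is that at a true solution both the gradient and the error are small, whereas in a trap only the gradient is, so the pair (‖grad‖, error) is enough to tell them apart.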