Optimization is central to neural networks: weights must be updated iteratively to learn pattern classification. Existing optimization techniques suffer from suboptimal local minima and slow convergence rates. In this paper, a stochastic diagonal Approximate Greatest Descent (SDAGD) algorithm is proposed to optimize neural network weights in a multi-stage backpropagation manner. SDAGD is derived from the operation...
In the present work, a change detection technique for remotely sensed images (under a scarcity of labeled patterns) is proposed in which an ensemble of semi-supervised classifiers is used instead of a single (weak) classifier. Iterative learning of the multiple-classifier system is carried out using the selected unlabeled patterns along with a few labeled patterns. Selection of unlabeled patterns...
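The truncated abstract leaves the selection criterion unstated, so the following is only a minimal sketch of ensemble self-training, with high-confidence ensemble agreement standing in for the paper's selection rule; the base classifiers, threshold, and function name are illustrative assumptions.

```python
# Hedged sketch of iterative ensemble self-training. The selection rule
# (confidence threshold on the averaged ensemble probabilities) is an
# assumption, not the paper's actual criterion.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def ensemble_self_train(X_lab, y_lab, X_unlab, n_members=5, n_rounds=3, thresh=0.9):
    # Assumes integer class labels 0..K-1, so a column index equals a label.
    members = [DecisionTreeClassifier(max_depth=5, random_state=i)
               for i in range(n_members)]
    for _ in range(n_rounds):
        for m in members:
            m.fit(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
        # Average class probabilities over the ensemble for the unlabeled pool.
        probs = np.mean([m.predict_proba(X_unlab) for m in members], axis=0)
        picked = probs.max(axis=1) >= thresh   # keep only confident patterns
        if not picked.any():
            break
        X_lab = np.vstack([X_lab, X_unlab[picked]])
        y_lab = np.concatenate([y_lab, probs[picked].argmax(axis=1)])
        X_unlab = X_unlab[~picked]
    return members
```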
Using an adequate number of Multilayer Perceptron input, hidden, and output layers, the dynamic behavior of a Cellular Neural Network, when the system converges to a fixed point, can be reproduced by a Multilayer Perceptron with restrictions. A Multilayer Perceptron can then be defined to act as a two-neuron Cellular Neural Network and vice versa. From this, we combine their properties in order...
In this paper we propose a new algorithm for neural network training, developed as a modification of the Levenberg-Marquardt algorithm for MLP learning. The proposed algorithm has good convergence and reduces the amount of oscillation in the learning procedure. We name this algorithm the GK-LM method. An example is given to show the usefulness of this method. Finally...
This paper investigates the performance of conjugate gradient algorithms with a sliding-window approach for training a multilayer perceptron (MLP). Online learning is used when the system under investigation is time-varying or when it is not convenient to obtain a full history of offline data about the system variables. A sliding-window framework is proposed to combine the robustness of offline learning...
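A minimal sketch of the sliding-window idea: retrain only on the most recent W samples so the model tracks a time-varying system without the full data history. The window size and the generic `model_step` callback are assumptions; the paper's conjugate-gradient inner optimizer is not reproduced here.

```python
# Sliding-window online training buffer: old samples fall out
# automatically, and the model is refit on the current window only.
from collections import deque
import numpy as np

class SlidingWindowTrainer:
    def __init__(self, model_step, window=200):
        self.buffer = deque(maxlen=window)  # keeps the W most recent samples
        self.model_step = model_step        # one training pass over a batch

    def observe(self, x, y):
        self.buffer.append((x, y))
        X = np.array([b[0] for b in self.buffer])
        Y = np.array([b[1] for b in self.buffer])
        self.model_step(X, Y)               # update on the window only
```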
The paper investigates enhancements to various conjugate gradient training algorithms applied to a multilayer perceptron (MLP) neural network architecture. It examines seven conjugate gradient algorithms proposed by different researchers between 1952 and 2005, the classical batch back-propagation, and the full-memory and memory-less BFGS (Broyden, Fletcher, Goldfarb and Shanno) algorithms...
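For orientation, a generic nonlinear conjugate-gradient weight update, using the Fletcher-Reeves coefficient as one representative choice among the variants such papers compare (Polak-Ribiere, Hestenes-Stiefel, etc.); the fixed step size here stands in for the line search these methods normally use.

```python
# One nonlinear CG step on a weight vector w; caller threads (d, g)
# between successive calls.
import numpy as np

def cg_step(w, grad_fn, d_prev, g_prev, lr=1e-2):
    g = grad_fn(w)
    if d_prev is None:
        d = -g                            # first step: steepest descent
    else:
        beta = (g @ g) / (g_prev @ g_prev)  # Fletcher-Reeves coefficient
        d = -g + beta * d_prev              # new conjugate direction
    return w + lr * d, d, g               # fixed step in place of line search
```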
Injecting weight noise during training has been proposed for almost two decades as a simple technique to improve the fault tolerance and generalization of a multilayer perceptron (MLP). However, little has been done regarding the convergence behavior of these algorithms. We therefore present in this paper convergence proofs for two of them. One is based on combining injecting multiplicative weight...
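The basic mechanism whose convergence is analyzed can be sketched as follows; the Gaussian noise model and per-update injection are assumptions, since the truncated abstract does not specify either algorithm variant in full.

```python
# Multiplicative weight-noise injection: each gradient is evaluated at
# perturbed weights w * (1 + eps), eps ~ N(0, sigma^2), while the clean
# weights are the ones actually updated.
import numpy as np

def noisy_weight_step(w, grad_fn, lr=1e-2, sigma=0.01, rng=np.random):
    eps = rng.normal(0.0, sigma, size=w.shape)
    w_noisy = w * (1.0 + eps)       # multiplicative perturbation
    g = grad_fn(w_noisy)            # gradient at the noisy weights
    return w - lr * g               # update the clean weights
```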
In this paper, we present a Neural Network (NN)-based model for classifying Arabic texts. We propose the use of Singular Value Decomposition (SVD) as a preprocessor for the NN, with the aim of further reducing the data in both size and dimensionality. Indeed, the use of SVD makes the data more amenable to classification and speeds up convergence of the training process. Specifically, the effectiveness...
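A minimal sketch of SVD as a preprocessor: project a high-dimensional term-document matrix onto its top-k singular directions before feeding it to the classifier. The rank k and the term-frequency representation are assumptions; the abstract does not state them.

```python
# Rank-k SVD reduction of a document-term matrix.
import numpy as np

def svd_reduce(X, k=100):
    # X: (n_docs, n_terms) term-frequency matrix; requires k <= min(X.shape).
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T             # (n_docs, k) reduced representation
```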
This paper reports on an efficient algorithm for locating the 'optimal' solutions of multi-objective optimization problems by combining a state-of-the-art optimizer with a fitness model-estimate. This hybrid framework is introduced to illustrate how to make sufficient use of an approximate model, which includes a 'controlled' process and an 'uncontrolled' process during the search. With the inclusion...
The multilayer feed-forward neural network is widely used and is trained by minimizing an error function. Back-propagation is a well-known training method for multilayer networks, but it often suffers from local minima and slow convergence. These problems arise from the gradient behavior of the commonly used sigmoid activation function (SAF): the weight update becomes zero when the activation...
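The gradient behavior in question is easy to verify numerically: the sigmoid's derivative s'(x) = s(x)(1 - s(x)) peaks at 0.25 and collapses toward zero for large |x|, so backpropagated weight updates vanish in saturated units.

```python
# Sigmoid derivative at increasingly saturated pre-activations.
import numpy as np

x = np.array([0.0, 2.0, 5.0, 10.0])
s = 1.0 / (1.0 + np.exp(-x))
print(s * (1 - s))   # ~[0.25, 0.105, 0.0066, 4.5e-05]
```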
The back-propagation (BP) network is widely recognized as a powerful tool for training multilayer neural networks (MLNNs). It usually suffers from a slow convergence rate and often gets stuck in local minima, since it applies the steepest descent method to update the network weights. A variety of related algorithms have been introduced to address this problem. The Levenberg-Marquardt algorithm is one...
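The Levenberg-Marquardt update referred to here is standard: it blends the Gauss-Newton step (fast near a minimum) with gradient descent (robust far from it) through a damping factor mu. A minimal sketch, assuming the residual convention e = prediction - target:

```python
# One Levenberg-Marquardt step: w <- w - (J^T J + mu I)^{-1} J^T e.
import numpy as np

def lm_step(w, J, e, mu):
    # J: Jacobian of residuals w.r.t. weights, e: residual vector.
    H = J.T @ J + mu * np.eye(w.size)     # damped approximate Hessian
    return w - np.linalg.solve(H, J.T @ e)
```

Small mu approaches Gauss-Newton; large mu approaches a short gradient-descent step.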
This work presents system identification using neural network approaches for modelling a laboratory-based twin rotor multi-input multi-output system (TRMS). We focus on a memetic algorithm based approach for training the multilayer perceptron neural network (NN) applied to nonlinear system identification. In the proposed system identification scheme, we exploit three global search methods...
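The memetic structure, stripped to its skeleton: a global population search proposes weight vectors, and each candidate is refined by a short local gradient descent before selection. The operators below (Gaussian mutation, truncation selection) are generic stand-ins, not the paper's three global search methods.

```python
# Minimal memetic training loop over flat weight vectors; loss and grad
# are user-supplied callables for the network being identified.
import numpy as np

def memetic_train(loss, grad, dim, pop=20, gens=50, local_steps=5,
                  lr=1e-2, rng=np.random):
    P = rng.normal(0, 1, (pop, dim))          # random initial candidates
    for _ in range(gens):
        for i in range(pop):                  # local refinement ("meme")
            for _ in range(local_steps):
                P[i] -= lr * grad(P[i])
        order = np.argsort([loss(p) for p in P])
        P = P[order]                          # best candidates first
        half = pop // 2                       # regenerate worst half by mutation
        P[half:] = P[:pop - half] + rng.normal(0, 0.1, (pop - half, dim))
    return P[0]
```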
A neural network training method for the identification of nonlinear systems in bounded time is presented in this paper. A sliding mode surface drives the adalines, perceptrons, and multilayer perceptrons so that a new second-order sliding mode is enforced for all time. This neural network-based sliding mode enforces an invariant differential manifold, with a time-varying feedback gain giving rise to...
This paper presents the optimization of a one-hidden-layer artificial neural network (ANN) design using evolutionary programming (EP) for predicting the energy output of a grid-connected photovoltaic system installed at the Malaysian Energy Centre (PTM), Bangi, Malaysia. In this study, the architecture and training parameters of the multilayer feedforward back-propagation ANN model were optimized while...
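For concreteness, a hedged sketch of EP over two ANN hyperparameters (hidden units and learning rate): mutate candidates and keep the fitter half each generation. The `fitness` callable is a stand-in for the validation error of the trained model, and the mutation scales are illustrative assumptions.

```python
# Evolutionary programming over (hidden_units, learning_rate) pairs;
# lower fitness (validation error) is better.
import random

def ep_search(fitness, pop_size=10, gens=20):
    pop = [(random.randint(2, 30), 10 ** random.uniform(-3, -1))
           for _ in range(pop_size)]
    for _ in range(gens):
        children = [(max(2, h + random.randint(-2, 2)),       # mutate size
                     lr * 10 ** random.gauss(0, 0.1))          # mutate rate
                    for h, lr in pop]
        pool = pop + children
        pool.sort(key=fitness)          # truncation selection
        pop = pool[:pop_size]
    return pop[0]
```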
Saturation of the hidden-layer neurons is a major cause of learning retardation in multilayer perceptrons (MLP). Under such conditions the traditional backpropagation (BP) algorithm becomes trapped in local minima. To renew the search for a global minimum, we need to detect these traps and apply an offset scheme to avoid them. We have discovered that the gradient norm drops to a very low value in local...
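The detection idea described can be sketched as a simple monitor: flag a probable trap when the gradient norm falls below a small threshold while the training error is still high. The thresholds are assumptions, and the offset action itself is only hinted at in the truncated abstract, so it is left to the caller.

```python
# Trap detector for BP training: low gradient norm + high error
# suggests a local minimum or saturated hidden units.
import numpy as np

def check_trap(grad, error, g_tol=1e-6, e_tol=1e-2):
    stuck = np.linalg.norm(grad) < g_tol and error > e_tol
    return stuck   # caller applies an offset to the weights if True
```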
Many interesting problems in reinforcement learning (RL) are continuous and/or high-dimensional, and in such cases RL techniques require function approximators for learning value functions and policies. Local linear models have often been preferred over distributed nonlinear models for function approximation in RL. We suggest that one reason for the difficulties encountered when using...