The present paper describes an algorithmic technique to speed up weight convergence in the on-line training of neural networks. Standard pattern-by-pattern backpropagation is modified to train the network over a time window of samples rather than a single sample, so that faster weight convergence may be achieved. The use of such a training technique is illustrated in an adaptive control task, and problems related to...
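The window-based idea can be sketched as follows. This is a hedged illustration, not the paper's algorithm: a single linear neuron whose gradient is averaged over a window of samples before one weight update is applied, with all sizes and the learning rate chosen for the demo.

```python
import numpy as np

def window_update(w, X, y, lr=0.1):
    """One gradient step for a linear neuron, averaged over the window."""
    err = X @ w - y                 # residuals over the whole window
    grad = X.T @ err / len(y)       # mean gradient across the window
    return w - lr * grad

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))         # one time window of 8 samples
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w                      # noiseless targets for the demo

w = np.zeros(3)
for _ in range(2000):
    w = window_update(w, X, y)
```

Averaging over the window smooths out per-sample noise in the gradient, which is what allows the larger effective step and faster convergence the abstract refers to.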
Image segmentation is critical to image processing and pattern recognition. An image segmentation system based on neural networks is proposed for the segmentation of color images. First, we introduce the BP neural network, which has the capacity for parallel computing, distributed storage, self-learning, fault tolerance, and nonlinear function approximation, so it is widely used in image segmentation, but it also...
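A minimal sketch of the kind of system described, assuming a one-hidden-layer BP network trained with plain batch backpropagation to label RGB pixels as object (1) or background (0). All sizes, colors, and constants are illustrative assumptions, not the paper's system.

```python
import numpy as np

rng = np.random.default_rng(2)
red = rng.uniform([0.6, 0.0, 0.0], [1.0, 0.4, 0.4], (50, 3))   # reddish pixels
blue = rng.uniform([0.0, 0.0, 0.6], [0.4, 0.4, 1.0], (50, 3))  # bluish pixels
X = np.vstack([red, blue])
y = np.array([1.0] * 50 + [0.0] * 50)

sig = lambda z: 1.0 / (1.0 + np.exp(-z))
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, 8);      b2 = 0.0
lr, n = 1.0, len(X)

for _ in range(2000):
    h = sig(X @ W1 + b1)                  # forward pass
    out = sig(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)   # backprop of squared error
    d_h = np.outer(d_out, W2) * h * (1 - h)
    W2 -= lr * h.T @ d_out / n; b2 -= lr * d_out.mean()
    W1 -= lr * X.T @ d_h / n;   b1 -= lr * d_h.mean(axis=0)

pred = sig(sig(X @ W1 + b1) @ W2 + b2) > 0.5  # segment the training pixels
```

Applied per pixel over a whole image, such a classifier yields a binary segmentation map; this is the nonlinear function approximation capacity the abstract mentions, used as a color-space decision boundary.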
The radial basis function (RBF) network is a well-known dynamic recurrent neural network. However, RBF weights and thresholds trained by the back-propagation algorithm, the gradient descent method, or a genetic algorithm remain fixed after training completes, so the network's adaptive ability is poor. To improve RBF identification performance, particle swarm optimization (PSO), which is a stochastic search algorithm,...
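The combination can be sketched as follows, as a hedged illustration rather than the paper's method: a standard global-best PSO searches the output weights of a small RBF network to fit a target function, with all constants (centers, widths, swarm parameters) chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)
xs = np.linspace(-1, 1, 20)
target = np.sin(np.pi * xs)                 # function to identify

centers = np.linspace(-1, 1, 5)             # fixed Gaussian centers

def rbf_out(w, x):
    phi = np.exp(-((x[:, None] - centers) ** 2) / 0.2)  # Gaussian basis
    return phi @ w

def loss(w):
    return np.mean((rbf_out(w, xs) - target) ** 2)

# standard global-best PSO over the 5 output weights
n_particles, dim = 30, len(centers)
pos = rng.uniform(-2, 2, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_val = np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([loss(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()
```

Because PSO needs only loss evaluations, not gradients, the same loop can be re-run whenever the plant changes, which is the adaptive-ability advantage the abstract alludes to.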
In a fully complex-valued feed-forward network, the convergence of the complex-valued back-propagation learning algorithm depends on the choice of the activation function, minimization criterion, initial weights and the learning rate. The minimization criteria used in the existing learning algorithms do not approximate the phase well in complex-valued function approximation problems. This aspect is...
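The phase-approximation problem can be made concrete with a small sketch. The combined criterion below is an illustrative assumption, not the paper's: a magnitude-only squared error cannot see outputs that are wrong only in phase, while a criterion that also penalizes the phase gap can.

```python
import numpy as np

def mag_error(y, t):
    """Squared error on magnitudes only: blind to phase."""
    return (np.abs(y) - np.abs(t)) ** 2

def mag_phase_error(y, t, lam=1.0):
    """Magnitude error plus an explicit penalty on the phase gap."""
    return mag_error(y, t) + lam * (np.angle(y) - np.angle(t)) ** 2

t = 1.0 + 1.0j                        # target output
y = np.abs(t) * np.exp(1j * 2.0)      # same magnitude, wrong phase
```

Here `mag_error(y, t)` is essentially zero even though the phase is badly wrong, while `mag_phase_error(y, t)` reports the discrepancy, illustrating why the choice of minimization criterion matters for phase approximation.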
In this paper, a new, efficient fast-terminal-attractor-based backpropagation learning algorithm for feedforward neural networks is proposed, which improves convergence speed. The effectiveness of the proposed algorithm in improving learning speed is demonstrated by simulation results, including a sensor-network example.
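The convergence-speed claim rests on a standard property of terminal attractors, sketched here with constants chosen for the demo (not the paper's algorithm): the dynamics e' = -e**(1/3) drive the error to exactly zero in finite time, whereas the ordinary dynamics e' = -e only decay exponentially.

```python
import numpy as np

def simulate(rate, e0=1.0, dt=0.01, steps=1000):
    """Integrate e' = rate(e) by forward Euler, clipping e at zero."""
    e = e0
    for _ in range(steps):
        e = max(e + dt * rate(e), 0.0)
    return e

terminal = simulate(lambda e: -np.cbrt(e))  # terminal-attractor dynamics
ordinary = simulate(lambda e: -e)           # ordinary gradient decay
```

After the same simulated time, the terminal-attractor error has reached zero exactly while the ordinary decay is still strictly positive, which is the finite-time convergence property such learning algorithms exploit.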
Back-propagation neural networks with Gaussian-function synapses have better convergence properties than those with linear multiplying synapses. In digital simulation, more computing time is spent on Gaussian function evaluation. We present a compact analog synapse cell which is not biased in the subthreshold region for fully parallel operation. This cell can approximate a Gaussian function with accuracy...
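For reference, the function such a cell approximates, and that costs the most time in digital simulation, can be sketched as follows; the width parameter is an illustrative assumption.

```python
import math

def gaussian_synapse(x, w, sigma=0.5):
    """Gaussian-function synapse: peak response when input x matches
    the stored weight w, falling off smoothly as they diverge."""
    return math.exp(-((x - w) ** 2) / (2.0 * sigma ** 2))
```

Evaluating this exponential for every synapse on every pattern is what dominates digital simulation time, which motivates computing it directly in compact analog hardware.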