The BP feed-forward network is the most widely applied neural network, and a number of training algorithms are currently available. The respective strengths and weaknesses of eight BP algorithms provided by the neural network toolbox in MATLAB are studied in this paper, in order to choose more appropriate and faster algorithms under different conditions. Based on this, the measurement of vacuum level with the method...
A new particle swarm optimization algorithm with a dynamically changing inertia weight and threshold value, based on improved adaptive particle swarm optimization, is proposed, in which the inertia weight of each particle is adjusted adaptively according to the premature-convergence degree of the swarm and the fitness of the particle. The diversity of inertia weights strikes a compromise between the global convergence...
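The abstract above does not give the exact adaptation formula, so the sketch below uses an illustrative stand-in rule: particles with better-than-average fitness get a small inertia weight (exploitation), worse ones keep the large weight (exploration). All parameter values (`w_min`, `w_max`, `c1`, `c2`, swarm size) are assumed, not taken from the paper.

```python
import numpy as np

def adaptive_pso(f, dim=2, n_particles=20, iters=150, seed=0):
    """PSO sketch with a fitness-dependent, per-particle inertia weight."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    w_min, w_max, c1, c2 = 0.4, 0.9, 1.5, 1.5
    for _ in range(iters):
        fit = np.array([f(p) for p in x])
        better = fit < pbest_f
        pbest[better] = x[better]
        pbest_f[better] = fit[better]
        gbest = pbest[pbest_f.argmin()].copy()
        f_avg, f_min = fit.mean(), fit.min()
        # Adaptive inertia: better-than-average particles exploit
        # (w near w_min), worse particles keep exploring (w = w_max).
        w = np.where(
            fit <= f_avg,
            w_min + (w_max - w_min) * (fit - f_min) / (f_avg - f_min + 1e-12),
            w_max,
        )
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        v = w[:, None] * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        v = np.clip(v, -2.0, 2.0)   # velocity clamp to keep the swarm stable
        x = x + v
    return gbest, float(f(gbest))

# Minimize the 2-D sphere function as a smoke test.
best_x, best_f = adaptive_pso(lambda p: float(np.sum(p ** 2)))
```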
The 2010 PhysioNet Challenge was to predict the last few seconds of a physiological waveform given its previous history and M-1 different concurrent physiological recordings. A robust approach was implemented by using a bank of adaptive filters to predict the desired channel. In all, M channels (the M-1 original signals, and 1 signal derived from the previous history of the target signal) were used...
This paper explores the application of support vector regression to adaptive inverse control problems. Support vector regression (SVR) has been proven to generate global solutions, unlike neural networks, because SVR essentially solves a quadratic programming (QP) problem. With this advantage, a plant model is identified and its inverse model is learned. In addition, adaptive algorithms for compensating...
This paper suggests using a block MAP-LMS (BMAP-LMS) adaptive filter instead of the MAP-LMS adaptive filter for estimating sparse channels. Besides converging faster than MAP-LMS, this block-based adaptive filter enables us to use a compressed-sensing version of it, which exploits the sparsity of the channel outputs to reduce the sampling rate of the received signal and to alleviate...
In this paper, a squared penalty term is added to the conventional error function to improve the generalization of neural networks. A weight-boundedness theorem and two convergence theorems are proved for the gradient learning algorithm with penalty when it is used to train a two-layer feedforward neural network. To illustrate the above theoretical findings, numerical experiments are conducted based...
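The penalized objective can be sketched as E(w) = 0.5·MSE + λ‖w‖², trained by plain gradient descent on a two-layer network. The network size, λ, learning rate, and toy regression target below are illustrative assumptions, not values from the paper; the point is that the λ-term adds a shrinkage component 2λw to every weight gradient, which is what keeps the weights bounded.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (50, 1))
y = np.sin(np.pi * X)                     # toy regression target (assumed)

# Two-layer feedforward network: tanh hidden layer, linear output.
W1, b1 = rng.normal(0, 0.5, (1, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
lam, lr = 1e-3, 0.2                       # penalty coefficient, step size (assumed)

for _ in range(5000):
    H = np.tanh(X @ W1 + b1)
    err = H @ W2 + b2 - y
    # Gradients of E(w) = 0.5 * MSE + lam * ||w||^2:
    # each data gradient gets an extra shrinkage term 2 * lam * w.
    gW2 = H.T @ err / len(X) + 2 * lam * W2
    gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)      # backprop through tanh
    gW1 = X.T @ dH / len(X) + 2 * lam * W1
    gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```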
We discuss the role of random basis function approximators in modeling and control. We analyze the published work on random basis function approximators and demonstrate that their favorable error rate of convergence O(1/n) is guaranteed only with very substantial computational resources. We also discuss implications of our analysis for applications of neural networks in modeling and control.
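The object under analysis, a random basis function approximator, can be sketched in a few lines: hidden-layer parameters are drawn once at random and never trained, and only a linear readout is fitted. The basis count, weight scale, regularizer, and toy target below are illustrative assumptions; the point of the abstract is that driving the approximation error down at the O(1/n) rate requires many such basis functions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (200, 1))
y = np.sin(3.0 * X).ravel()              # toy target function (assumed)

n = 100                                  # number of random basis functions (assumed)
W = rng.normal(0.0, 3.0, (1, n))         # hidden weights: drawn once, never trained
b = rng.uniform(-1.0, 1.0, n)            # hidden biases: also fixed at random
H = np.tanh(X @ W + b)                   # random basis functions evaluated at the data
# Only the linear readout is fitted, via ridge-regularized least squares.
c = np.linalg.solve(H.T @ H + 1e-6 * np.eye(n), H.T @ y)
mse = float(np.mean((H @ c - y) ** 2))
```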
In this paper we present a new continuous-time recurrent neurofuzzy network structure for modeling and identification of a class of nonlinear systems, using a training algorithm motivated from previous works in adaptive observers. Using only output measurements and the knowledge of an excitation input signal, the proposed network is trained by generating estimates of an ideal network and jointly identifying...
In the conventional RBF network structure, different layers perform different tasks. Hence, it is useful to split the optimization of the hidden layer and the output layer of the network accordingly. This study proposes hybrid learning of the RBF network with Particle Swarm Optimization (PSO) for better convergence, lower error rates, and better classification results. The hybrid learning of the RBF network involves two phases...
This paper presents a comparison of results obtained from neural network training by backpropagation and particle swarm optimization (PSO) algorithms. The neural network model has been developed for field-strength prediction in indoor environments. Neural networks have already been shown to be a powerful tool in RF propagation prediction. It is very important to choose a proper algorithm for training...
Based on an analysis of the fundamental principles of the back-propagation (BP) network algorithm, and in view of its limitations, this paper proposes corresponding improved algorithms in two respects: accelerating the learning speed of the BP network and improving the convergence of the network. By increasing the discrepancy factors of the neuron function, the traditional function is optimized...
When traditional methods are applied to train an approximately linear support vector machine (SVM), the resulting kernel matrix occupies a large amount of computer memory and leads to slow convergence. In order to improve the convergence speed of the SVM, a method of training an approximately linear support vector machine based on variational inequality (VIALSVM) is proposed. The method turns the convex quadratic...
This paper proposes a wavelet neural network (WNN) with a self-adaptive learning rate. The algorithm automatically changes the learning rate with the operating conditions, without any manual adjustment. It thereby overcomes the drawbacks of WNN, i.e. slow convergence, the inability to determine the value of the learning rate, and the tendency to fall into local minima. The results of...
Dynamic multi-objective optimization (DMO) is one of the most challenging classes of optimization problems, where the objective functions change over time and the optimization algorithm is required to identify the corresponding Pareto-optimal solutions with minimal time lag. DMO has received very little attention in the past, and none of the existing multi-objective algorithms perform satisfactorily on...
The Complex Least Mean Square (Complex LMS) algorithm has been widely used in various adaptive filtering applications, e.g. in the wireless communications and biomedical fields, due to its computational simplicity. However, the main drawback of the Complex LMS algorithm is its slow convergence. In addition, its performance depends on the choice of the convergence factor, or learning rate. In this...
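The baseline recursion this abstract refers to is the standard Complex LMS update w(n+1) = w(n) + μ e(n) x*(n), sketched below in a channel-identification setting. The channel taps, filter order, and step size μ are illustrative choices; with a fixed μ, the trade-off the abstract mentions is visible directly, since a larger μ converges faster but is noisier and can diverge.

```python
import numpy as np

rng = np.random.default_rng(0)
order = 4
# Unknown complex FIR channel to identify (illustrative coefficients).
h = np.array([0.5 + 0.5j, -0.3 + 0.1j, 0.2 - 0.4j, 0.1 + 0.0j])

N = 3000
# Unit-power circular complex Gaussian input signal.
x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
d = np.convolve(x, h)[:N]                 # desired signal: channel output

mu = 0.05                                 # step size / convergence factor (assumed)
w = np.zeros(order, dtype=complex)
for n in range(order - 1, N):
    x_vec = x[n - order + 1:n + 1][::-1]  # [x[n], x[n-1], ..., x[n-order+1]]
    e = d[n] - np.dot(w, x_vec)           # a-priori estimation error
    w = w + mu * e * np.conj(x_vec)       # Complex LMS weight update
```

In this noise-free setting the weights converge to the true channel taps, so the final tap error measures the convergence speed directly.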