To address the unconstrained optimization problem, the conjugate gradient (CG) method generates a sequence of iterates that approaches the minimum of the objective function. Because of rounding errors, many of CG's theoretical merits are lost in practical use. Hence the rate of convergence is often not ideal, and a practical problem confronting us is how to improve the conjugate gradient iteration so...
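The abstract above is truncated, so the authors' specific improvement is not shown here. As a generic illustration of the baseline CG iteration it refers to, the following is a minimal sketch of linear CG applied to a convex quadratic f(x) = ½xᵀAx − bᵀx (all names here are illustrative, not from the paper):

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=100):
    """Minimize f(x) = 0.5 x^T A x - b^T x for symmetric positive definite A."""
    x = x0.astype(float).copy()
    r = b - A @ x          # residual = negative gradient
    p = r.copy()           # initial search direction
    for _ in range(max_iter):
        rr = r @ r
        if np.sqrt(rr) < tol:
            break
        Ap = A @ p
        alpha = rr / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        beta = (r @ r) / rr        # conjugacy-preserving update
        p = r + beta * p
    return x
```

In exact arithmetic this terminates in at most n steps for an n-dimensional problem; the rounding-error loss of conjugacy mentioned in the abstract is exactly what motivates restarts and modified beta formulas in practice.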
A secant/finite-difference algorithm based on a trust region strategy is presented. The algorithm is designed to solve the minimax optimization problem for a finite number of functions whose Hessian matrices are typically sparse. By integrating a secant method with a finite difference method, and adopting a symmetrically consistent partition of the columns of the Hessian matrices, the algorithm can employ...
We propose a new trust region method that employs both the modified BFGS update and the Armijo line search. The method exploits function and gradient information and ensures that the Hessian matrix of the trust region subproblem is positive definite. Under certain assumptions, global convergence and a superlinear convergence property are established. Finally, numerical experiments show that the method is efficient.
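The paper's full method is not reproduced here; as a generic illustration of the Armijo line search it builds on, here is a minimal backtracking sketch (function and parameter names are illustrative):

```python
import numpy as np

def armijo_backtracking(f, grad_f, x, d, alpha0=1.0, c=1e-4, rho=0.5):
    """Shrink the step size until the Armijo sufficient-decrease condition
    f(x + a d) <= f(x) + c a grad_f(x)^T d holds for descent direction d."""
    fx = f(x)
    slope = grad_f(x) @ d          # must be negative for a descent direction
    alpha = alpha0
    while f(x + alpha * d) > fx + c * alpha * slope:
        alpha *= rho
    return alpha
```

The condition guarantees a decrease proportional to the step length and directional derivative, which is one ingredient in the global convergence arguments the abstract alludes to.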
The radial basis function (RBF) network is a well-known dynamic recurrent neural network. However, RBF weights and thresholds trained by the back-propagation algorithm, the gradient descent method, or a genetic algorithm remain fixed once training completes, so the network's adaptive ability is poor. To improve RBF identification performance, particle swarm optimization (PSO), a stochastic search algorithm,...
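The abstract is truncated before the PSO details. For orientation, here is a minimal sketch of the Gaussian RBF forward pass that such training schemes tune (names and the Gaussian basis choice are illustrative assumptions, not from the paper):

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Gaussian RBF network output: y = sum_j w_j * exp(-||x - c_j||^2 / (2 s_j^2)).
    centers: (m, d) array, widths: (m,) array, weights: (m,) array, x: (d,)."""
    d2 = ((centers - x) ** 2).sum(axis=1)       # squared distance to each center
    phi = np.exp(-d2 / (2 * widths ** 2))       # Gaussian activations
    return phi @ weights                        # weighted sum output
```

PSO-style training would treat the concatenated (centers, widths, weights) vector as a particle's position and minimize identification error over it, rather than fixing the parameters after one gradient-based training run.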
A distributed online learning framework for support vector machines (SVMs) is presented and analyzed. First, the generic binary classification problem is decomposed into multiple relaxed subproblems. Then, each of them is solved iteratively through parallel update algorithms with minimal communication overhead. This computation can be performed by individual processing units, such as separate computers...
In practice, the convergence rate and stability of perturbation-based extremum-seeking (ES) schemes can be very sensitive to the curvature of the plant map. This sensitivity arises from the use of a gradient-descent adaptation algorithm. Such ES schemes may need to be conservatively tuned in order to maintain stability over a wide range of operating conditions, resulting in slower optimisation than...
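The abstract's proposed remedy is cut off; as a generic illustration of the classic perturbation-based ES loop it criticises, here is a minimal discrete-time sketch (gains, dither amplitude, and the static-map assumption are illustrative):

```python
import numpy as np

def extremum_seek(J, theta0, a=0.1, omega=5.0, k=0.5, dt=0.01, T=200.0):
    """Perturbation-based extremum seeking on a static map J:
    add a sinusoidal dither, demodulate the output with the same sinusoid,
    and integrate the result as a gradient-descent update."""
    theta = float(theta0)
    for i in range(int(T / dt)):
        t = i * dt
        dither = a * np.sin(omega * t)
        y = J(theta + dither)                       # perturbed plant output
        theta -= k * dt * np.sin(omega * t) * y     # demodulate + integrate
    return theta
```

The effective adaptation gain is proportional to k·a·J''(θ*)/2, which is exactly the curvature dependence the abstract identifies: a steep map can destabilise the loop, while a flat map makes it crawl.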
This paper introduces a modified PSO, the gradient particle swarm optimizer (GPSO), for geometric constraint solving. GPSO combines the global search capability of PSO with the fast convergence of the gradient algorithm, a prominent iterative method for linear equations. GPSO uses PSO to search the region of the whole space where the best solution may exist, and then performs a fine search...
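The paper's exact GPSO is not specified in the truncated abstract. As a hypothetical sketch of the general idea it describes (PSO for coarse global search, then gradient descent for fine local search), with all coefficients and names chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def gpso(f, grad_f, dim, bounds, n_particles=20, pso_iters=50,
         gd_iters=200, lr=0.01):
    """Coarse PSO search over [lo, hi]^dim, then gradient-descent refinement
    of the swarm's best position (a sketch, not the paper's GPSO)."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(pso_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(gd_iters):            # fine search from the PSO incumbent
        gbest -= lr * grad_f(gbest)
    return gbest
```

The division of labour mirrors the abstract: the stochastic swarm avoids local traps, and the deterministic gradient phase supplies the rapid final convergence PSO alone lacks.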
Image registration is the process of overlaying two or more images taken at different times. It is a very important step in medical imaging, as well as in remote sensing, for quantifying changes occurring over time. Most image registration algorithms measure a distance between two images and try to find the transformation for which this distance is minimal. However, the distance function...
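As a toy illustration of "find the transformation that minimizes a distance," here is a one-dimensional sketch that searches integer shifts minimizing the sum of squared differences (the 1-D setting and function names are simplifying assumptions; real registration uses richer transforms and similarity measures):

```python
import numpy as np

def register_shift(ref, moving, max_shift=10):
    """Exhaustively search integer circular shifts of `moving` and return
    the shift minimizing the sum-of-squared-differences to `ref`."""
    best_shift, best_ssd = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(moving, s)
        ssd = float(((ref - shifted) ** 2).sum())
        if ssd < best_ssd:
            best_shift, best_ssd = s, ssd
    return best_shift
```

The abstract's caveat applies even here: the SSD landscape over the transformation parameters is generally non-convex, so gradient-based search can stall in local minima that exhaustive or multi-start schemes avoid.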
In this paper we develop a new dual decomposition method for optimizing a sum of convex objective functions corresponding to multiple agents but with coupled constraints. In our method we define a smooth Lagrangian, by using a smoothing technique developed by Nesterov, which preserves separability of the problem. With this approach we propose a new decomposition method (the proximal center method)...
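The proximal center method itself is not detailed in the truncated abstract. As a much-simplified illustration of plain dual decomposition on a two-agent problem with one coupling constraint (the specific objectives and step size are invented for the example; Nesterov's smoothing is not included):

```python
def dual_decomposition(c, lr=0.5, iters=100):
    """Dual decomposition for  min x1^2 + (x2-2)^2  s.t.  x1 + x2 = c.
    Each agent minimizes its own Lagrangian term separately; the shared
    multiplier is updated by ascent on the coupling-constraint residual."""
    lam = 0.0
    for _ in range(iters):
        x1 = -lam / 2.0             # agent 1: argmin_x x^2 + lam*x
        x2 = 2.0 - lam / 2.0        # agent 2: argmin_x (x-2)^2 + lam*x
        lam += lr * (x1 + x2 - c)   # dual ascent on the residual
    return x1, x2, lam
```

The separability the abstract emphasizes is visible here: each agent's subproblem depends only on its own variable and the shared multiplier, so the minimizations can run in parallel; smoothing techniques improve the poor convergence rate this basic scheme can exhibit.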
We consider a network of sensors deployed to sense a spatial field for the purposes of parameter estimation. Each sensor makes a sequence of measurements that is corrupted by noise. The estimation problem is to determine the value of a parameter that minimizes a cost that is a function of the measurements and the unknown parameter. The cost function is such that it can be written as the sum of functions...
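The sensors' cost being a sum of per-sensor terms suggests incremental methods; as a generic sketch (not the paper's algorithm, and with a diminishing step size chosen for the example), each sensor in turn applies one gradient step of its own local cost:

```python
def incremental_gradient(local_grads, theta0, epochs=500):
    """Cyclic incremental gradient descent for minimizing sum_i f_i(theta):
    in each epoch, every sensor applies one step of its local gradient."""
    theta = float(theta0)
    for k in range(epochs):
        lr = 1.0 / (k + 2)           # diminishing step size for convergence
        for g in local_grads:
            theta -= lr * g(theta)
    return theta
```

For quadratic local costs f_i(θ) = (θ − y_i)²/2 the summed cost is minimized at the mean of the measurements, so this scheme effectively averages the sensors' noisy readings without any node ever seeing the full cost.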