In this paper, a multi-agent distributed continuous-time algorithm is proposed to solve a large-scale linear algebraic equation Ax = b. Unlike many existing results, which assume that each agent knows a few rows of A, the algorithm proposed in this paper assumes that each agent knows a few columns of A. To solve the linear algebraic equation, the problem is first converted to an optimization problem with a linear...
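The column-partition idea can be illustrated with a minimal sketch (our own, not the paper's algorithm): each agent owns a column block of A and the matching slice of x, and descends the shared residual of 0.5‖Ax − b‖². In a true network the residual would be estimated by consensus with neighbors; here that step is abstracted away and the residual is read directly. The instance, step size, and partition are illustrative assumptions.

```python
import numpy as np

# Sketch only: block-gradient descent on 0.5 * ||A x - b||^2 with a
# column partition; the consensus estimation of the residual is abstracted.
rng = np.random.default_rng(0)
A = rng.standard_normal((12, 6))        # tall matrix, full column rank a.s.
b = A @ rng.standard_normal(6)          # consistent right-hand side
blocks = np.split(np.arange(6), 3)      # agent i owns 2 columns of A
x = np.zeros(6)
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L for this quadratic

for _ in range(5000):
    r = A @ x - b                       # shared residual (by consensus in a network)
    for idx in blocks:                  # each agent updates only its own block
        x[idx] -= step * A[:, idx].T @ r

print(np.linalg.norm(A @ x - b))        # residual driven toward zero
```

Because every block uses the residual computed at the start of the iteration, the combined update is exactly one gradient step, so standard step-size conditions apply.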
In this paper, a generalized convex network optimization problem with local domains and constraints is formulated and solved using distributed multi-agent dynamics. Within this network optimization framework, it is assumed that the local system states and constraints are available only to the individual agent, and that each agent may share information only with its neighbors. A distributed PI-based...
Packing and layout problems are NP-complete in theory and arise extensively in many engineering fields in practice. The artificial fish swarm algorithm (AFSA) is a newly proposed and promising swarm intelligence optimization algorithm, so we apply this novel intelligent algorithm to packing and layout problems. However, the algorithm still has some defects...
In this paper, we consider the privacy-preserving problem for consensus protocols. First, we introduce a privacy-preserving scheme in which each node produces and transmits a sequence of random values whose mean equals the node's initial state. We show that the network can reach average consensus under this privacy-preserving scheme, and provide a sufficient condition under which the initial state...
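The mechanism described can be sketched with a toy simulation (a 4-node ring, doubly stochastic weights, and a finite noise horizon K are our assumptions, and the particular zero-sum noise construction is illustrative, not the paper's exact protocol): each node transmits its state plus a random perturbation whose sum over the horizon is zero, so the network average is preserved and ordinary consensus iterations still converge to it.

```python
import numpy as np

rng = np.random.default_rng(1)
# 4-node ring with a doubly stochastic weight matrix (assumed topology)
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
x = np.array([3.0, -1.0, 4.0, 2.0])      # private initial states, average 2.0
K = 10                                    # noise-injection horizon

# Each node's transmitted noise sequence sums to zero over the horizon,
# so the network-wide average is left unchanged.
noise = rng.standard_normal((K, 4))
noise -= noise.mean(axis=0)

for k in range(200):
    y = x + (noise[k] if k < K else 0.0)  # masked values are what neighbors see
    x = W @ y                             # standard consensus update on masked values

print(x)                                  # every node ends near the true average 2.0
```

Since W is doubly stochastic, each update preserves the mean of its input, and the injected noise contributes zero in total, so the consensus value is the true average even though no node ever transmits its raw initial state.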
The generalization ability of learning algorithms is a central focus of machine learning research, and empirical risk minimization (ERM) plays an important role when the population distribution of the observations is unknown. Most previous results are based on computational learning theory, which asks how many samples are needed to ensure that the estimated expected risk satisfies...
The comprehensive learning particle swarm optimization (CLPSO) algorithm performs well at overcoming premature convergence and avoiding local minima, which are shortcomings of particle swarm optimization. It can solve complex, multi-modal single-objective problems, but it does not perform as well on multi-objective optimization problems because of the difficulty of...
The glowworm swarm optimization (GSO) algorithm is one of the swarm intelligence optimization algorithms proposed in recent years. Its main idea comes from the cooperative behavior among individuals during courtship and foraging. In this paper, in order to improve convergence speed in late iterations, keep the algorithm from falling into local optima, and reduce isolated nodes, the Adaptive...
Many modern computer vision and machine learning applications rely on solving difficult optimization problems that involve non-differentiable objective functions and constraints. The alternating direction method of multipliers (ADMM) is a widely used approach to solve such problems. Relaxed ADMM is a generalization of ADMM that often achieves better performance, but its efficiency depends strongly...
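The relaxation in question can be illustrated with a minimal sketch of over-relaxed ADMM on a lasso problem, min 0.5‖Ax − b‖² + λ‖z‖₁ subject to x = z (the instance, ρ, and α = 1.6 are arbitrary assumptions, not values from the paper; α > 1 is the over-relaxation the snippet refers to):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 10))
x_true = np.zeros(10); x_true[[1, 4]] = [2.0, -3.0]
b = A @ x_true + 0.01 * rng.standard_normal(20)
lam, rho, alpha = 0.5, 1.0, 1.6          # alpha > 1: over-relaxation

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
M = np.linalg.inv(A.T @ A + rho * np.eye(10))  # x-update is a fixed quadratic
Atb = A.T @ b
x, z, u = np.zeros(10), np.zeros(10), np.zeros(10)

for _ in range(2000):
    x = M @ (Atb + rho * (z - u))
    x_hat = alpha * x + (1 - alpha) * z  # the relaxation step
    z = soft(x_hat + u, lam / rho)
    u = u + x_hat - z

print(np.round(z, 3))                    # sparse estimate close to x_true
```

Setting alpha = 1 recovers plain ADMM; values around 1.5–1.8 often speed convergence, which is exactly the tuning sensitivity the abstract alludes to.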
Linear programming relaxations are central to MAP inference in discrete Markov Random Fields. The ability to properly solve the Lagrangian dual is a critical component of such methods. In this paper, we study the benefit of using Newton-type methods to solve the Lagrangian dual of a smooth version of the problem. We investigate their ability to achieve superior convergence behavior and to better handle...
The process optimization of train operation is a sophisticated multi-objective optimization problem, especially under timing constraints. Because the multi-objective optimization model of the train operation process with time constraints is difficult to solve precisely within the limited optimization time, the process optimization of train operation with time constraints based...
This paper introduces the alternating direction method of multipliers (ADMM) to reduce the computational workload of DOA estimation within the compressive sensing framework, i.e., basis pursuit de-noising (BPDN). BPDN transforms the DOA estimation problem into an optimization problem, which is traditionally solved by the interior-point method (IPM). Though IPM can obtain...
The particle swarm optimization algorithm is improved by introducing immune selection, adaptive propagation, and multi-population evolution. An improved adaptive propagation chaotic particle swarm optimization algorithm based on immune selection (IS-APCPSO for short) is proposed in this paper. The performance of several algorithms is compared on a classic example of traffic network...
Variable Projection (VarPro) is a framework for solving optimization problems efficiently by optimally eliminating a subset of the unknowns. It is particularly well suited to Separable Nonlinear Least Squares (SNLS) problems, a class of optimization problems that includes low-rank matrix factorization with missing data and affine bundle adjustment as instances. VarPro-based methods have received much attention...
We formulate an Alternating Direction Method of Multipliers (ADMM) that systematically distributes the computations of any technique for optimizing pairwise functions, including non-submodular potentials. Such discrete functions are very useful in segmentation and a breadth of other vision problems. Our method decomposes the problem into a large set of small sub-problems, each involving a sub-region...
We present a method for segmenting a one-dimensional piecewise polynomial signal corrupted by additive noise. The core of the method is based on sparse modeling, and the resulting reweighted convex optimization problem is solved numerically by proximal splitting. The method solves a sequence of weighted l21-minimization problems, where the weights used for the next iteration are computed...
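The key proximal map in such l21 schemes is block (group) soft-thresholding: it shrinks a group's l2 norm by the threshold and zeroes the group when its norm falls below it. A minimal sketch (the function name and interface are our own; in a reweighted scheme the per-group weight simply scales the threshold):

```python
import numpy as np

def prox_group_l2(v, tau):
    """Proximal operator of tau * ||.||_2: shrink the norm by tau, or zero out."""
    nrm = np.linalg.norm(v)
    if nrm <= tau:
        return np.zeros_like(v)
    return (1.0 - tau / nrm) * v

print(prox_group_l2(np.array([3.0, 4.0]), 1.0))  # norm 5 -> 4: [2.4, 3.2]
print(prox_group_l2(np.array([0.3, 0.4]), 1.0))  # norm below tau -> zeros
```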
We propose a novel parallel essentially cyclic asynchronous algorithm for minimizing the sum of a smooth (nonconvex) function and a convex (nonsmooth) regularizer. The framework hinges on Successive Convex Approximation (SCA) techniques and on a new global model that describes many asynchronous environments more faithfully and exhaustively than state-of-the-art models. A key...
This paper is concerned with a class of distributed nonsmooth convex constrained optimization problems with set constraints. The objective function is a sum of local convex functions, which are not necessarily differentiable. A new distributed continuous-time gradient-based algorithm using the decomposition design is explicitly constructed to solve the distributed optimization problem. Rigorous proofs...
The goal of Nonnegative Matrix Factorization (NMF) is to approximate a large nonnegative matrix as a product of two significantly smaller nonnegative matrices. In comparison to other algorithms for computing the NMF, Newton-type methods can be parallelized very well because Newton iterations can be performed in parallel without exchanging data between processes. However, these...
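As background for the objective being factorized, here is the classic multiplicative-update baseline (Lee–Seung) for the Frobenius-norm NMF, the scheme that Newton-type iterations would replace; the instance, rank, and iteration count are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
r = 3                                          # target rank (assumed)
V = rng.random((10, r)) @ rng.random((r, 8))   # exactly rank-r nonnegative data
W = rng.random((10, r)) + 0.1
H = rng.random((r, 8)) + 0.1
eps = 1e-12                                    # guards against division by zero

# Lee-Seung multiplicative updates for min ||V - W H||_F^2 with W, H >= 0;
# the element-wise multiply/divide form keeps both factors nonnegative.
for _ in range(1000):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # small relative error
```

Each H-update (and likewise each W-update) decomposes into independent column-wise (row-wise) problems, which hints at the parallelism the abstract attributes to Newton-type methods.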
To improve the simulation accuracy of a disease prediction model, a modified hybrid algorithm combining a BP neural network (BPNN) with the particle swarm optimization (PSO) algorithm, optimized using chaos theory, is proposed, since BPNN easily falls into local extrema. Chaos theory is used to optimize the PSO algorithm and overcome the premature convergence of the traditional PSO algorithm...
The competitive swarm optimizer (CSO), proposed recently, has shown promising results for solving large-scale global optimization problems. However, CSO exploits the population insufficiently. In this paper, a competitive swarm optimizer integrated with Cauchy and Gaussian mutation (CGCSO) is proposed for large-scale optimization. The new algorithm not only updates the losers' positions with...
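The pairwise-competition mechanism underlying CSO can be sketched as follows, with a Gaussian mutation of winners bolted on under a greedy accept-if-better rule; the objective, parameters, and acceptance rule are our assumptions and do not reproduce CGCSO's exact operators:

```python
import numpy as np

rng = np.random.default_rng(4)
sphere = lambda X: np.sum(X * X, axis=-1)  # toy objective (assumed)
n, d, gens = 40, 10, 500                   # even swarm size, dimension, generations
X = rng.uniform(-5.0, 5.0, (n, d))
V = np.zeros((n, d))

for g in range(gens):
    order = rng.permutation(n)
    f = sphere(X)
    xbar = X.mean(axis=0)                  # swarm mean used in the loser update
    for a, bidx in order.reshape(-1, 2):   # random pairwise competitions
        w, l = (a, bidx) if f[a] <= f[bidx] else (bidx, a)
        r1, r2, r3 = rng.random((3, d))
        # loser learns from the winner and the swarm mean (core CSO update)
        V[l] = r1 * V[l] + r2 * (X[w] - X[l]) + 0.1 * r3 * (xbar - X[l])
        X[l] = X[l] + V[l]
        # Gaussian mutation of the winner, kept only if it improves (assumption)
        cand = X[w] + 0.1 * rng.standard_normal(d)
        if sphere(cand) < f[w]:
            X[w] = cand

print(sphere(X).min())                     # best value driven toward zero
```

Only losers move in plain CSO, which is the under-exploitation the abstract points at; mutating winners is one way to add exploitation pressure without disturbing the competition structure.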