We introduce DeNT, a decentralized Newton-based tracking algorithm that solves and tracks the solution trajectory of continuously varying networked convex optimization problems. DeNT is derived from the prediction-correction methodology, in which the time-varying optimization problem is sampled at discrete time instants and a sequence is generated by alternately executing predictions on how...
This paper considers an optimization problem in which components of the objective function are available at different nodes of a network and nodes are allowed to exchange information only with their neighbors. The decentralized alternating direction method of multipliers (DADMM) is a well-established iterative method for solving this category of problems; however, implementation of DADMM requires solving an optimization...
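The per-node subproblem that the abstract refers to can be seen in the classical edge-based decentralized consensus ADMM. The sketch below (illustrative problem data, scalar decision variables, a 3-node path graph) writes that scheme in its reduced form, where quadratic local objectives make the x-update subproblem solvable in closed form:

```python
# Decentralized consensus ADMM sketch: minimize sum_i f_i(x) with
# f_i(x) = 0.5 * a_i * (x - b_i)^2 over a path graph 0 - 1 - 2.
# Problem data and the penalty parameter c are illustrative.
a = [1.0, 2.0, 3.0]
b = [1.0, 2.0, 3.0]
nbrs = [[1], [0, 2], [1]]        # neighbor lists of the path graph
c = 1.0                          # ADMM penalty parameter

n = len(a)
x = [0.0] * n                    # primal variables, one per node
u = [0.0] * n                    # aggregated edge duals at each node

for _ in range(3000):
    x_new = [0.0] * n
    for i in range(n):
        deg = len(nbrs[i])
        s = sum(x[i] + x[j] for j in nbrs[i])    # neighbor averaging term
        # Closed-form solution of the local x-update subproblem:
        # argmin_x f_i(x) + u_i x + (c/2) sum_{j in N_i} (x - (x_i+x_j)/2)^2
        x_new[i] = (a[i] * b[i] - u[i] + 0.5 * c * s) / (a[i] + c * deg)
    for i in range(n):
        # Dual ascent on the consensus constraints of incident edges.
        u[i] += 0.5 * c * sum(x_new[i] - x_new[j] for j in nbrs[i])
    x = x_new

x_star = sum(a[i] * b[i] for i in range(n)) / sum(a)   # centralized minimizer
print(x, x_star)
```

Each node only uses its own data and its neighbors' latest iterates, and all three copies converge to the minimizer of the aggregate objective; the cost per iteration is the exact subproblem solve that the quoted paper seeks to approximate.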
We study networked unconstrained convex optimization problems where the objective function changes continuously in time. We propose a decentralized algorithm (DePCoT) with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and gradient-based correction steps, while sampling the problem data with a constant sampling period h. Under suitable conditions and for...
We develop a framework for trajectory tracking in dynamic settings, where an autonomous system is charged with the task of remaining close to an object of interest whose position varies continuously in time. We model this scenario as a convex optimization problem with a time-varying objective function and propose an adaptive discrete-time sampling prediction-correction scheme to find and track the...
We consider unconstrained convex optimization problems with objective functions that vary continuously in time. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of 1/h. The prediction step is derived by analyzing the iso-residual dynamics of the optimality...
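The prediction-correction scheme described in these abstracts can be illustrated on a scalar time-varying problem. The sketch below (illustrative objective; the papers treat general networked and vector settings) predicts along the dynamics of the optimality condition, then corrects with a few gradient steps on the freshly sampled objective:

```python
# Prediction-correction tracking of x*(t) = argmin_x f(x; t) for
# f(x; t) = 0.5 * (x - cos t)^2, whose optimal trajectory is x*(t) = cos t.
import math

h = 0.1          # sampling period (data sampled at rate 1/h)
alpha = 0.5      # correction step size
C = 3            # correction steps per sample

def grad_x(x, t):   return x - math.cos(t)   # gradient of f in x
def grad_tx(x, t):  return math.sin(t)       # mixed derivative d/dt of grad_x
def hess_xx(x, t):  return 1.0               # Hessian of f in x

x, t = 1.0, 0.0   # start at the optimum x*(0) = 1
errs = []
for k in range(100):
    # Prediction: Euler step along dx*/dt = -hess_xx^{-1} * grad_tx,
    # obtained by differentiating the optimality condition grad_x = 0 in t.
    x = x - h * grad_tx(x, t) / hess_xx(x, t)
    t += h
    # Correction: gradient descent on the newly sampled objective f(. ; t).
    for _ in range(C):
        x = x - alpha * grad_x(x, t)
    errs.append(abs(x - math.cos(t)))

print(max(errs[10:]))  # tracking error settles in an O(h^2) band
```

Without the prediction step the tracker lags the moving optimum by O(h); the prediction removes the first-order lag, which is the source of the improved tracking bounds these papers establish.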
This paper considers convex optimization problems where nodes of a network have access to summands of a global objective function. Each of these local objectives is further assumed to be an average of a finite set of functions. The motivation for this setup is to solve large scale machine learning problems where elements of the training set are distributed to multiple computational elements. The decentralized...
Agents of a network have access to strongly convex local functions f_i and attempt to minimize the aggregate function f(x) = Σ_{i=1}^n f_i(x) while relying on variable exchanges with neighboring nodes. Various methods to solve this distributed optimization problem exist but they all rely on first order information. This paper introduces Network Newton, a method that incorporates second order information...
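A Network Newton style iteration can be sketched as follows for scalar decision variables (problem data, weights, and constants here are illustrative). Each node holds f_i(x) = 0.5·a_i·(x − b_i)² and a row of a doubly stochastic weight matrix W; the method minimizes the penalized objective F(y) = α·Σ_i f_i(y_i) + ½·yᵀ(I − W)y, whose Hessian H = αG + I − W is split as H = D − B with diagonal D, and approximates the Newton direction by a truncated K-term series that only needs K extra neighbor exchanges:

```python
# Network Newton (NN-K) sketch on a 3-node path graph.
n, K, alpha = 3, 2, 0.1
a = [1.0, 2.0, 3.0]
b = [1.0, 2.0, 3.0]
W = [[0.75, 0.25, 0.0],
     [0.25, 0.50, 0.25],
     [0.0,  0.25, 0.75]]        # symmetric, doubly stochastic weights

def grad(y):
    # Gradient of the penalized objective F at y (one entry per node).
    return [alpha * a[i] * (y[i] - b[i]) + y[i]
            - sum(W[i][j] * y[j] for j in range(n)) for i in range(n)]

# Hessian splitting H = D - B with D = alpha*G + 2(I - diag(W)),
# B = I - 2 diag(W) + W; both use only local and neighbor information.
D = [alpha * a[i] + 2.0 * (1.0 - W[i][i]) for i in range(n)]
B = [[(1.0 - W[i][i]) if i == j else W[i][j] for j in range(n)]
     for i in range(n)]

y = [0.0] * n
for _ in range(200):
    g = grad(y)
    d = [-g[i] / D[i] for i in range(n)]          # zeroth series term (NN-0)
    for _ in range(K):                            # K extra "hops" of the series
        Bd = [sum(B[i][j] * d[j] for j in range(n)) for i in range(n)]
        d = [(Bd[i] - g[i]) / D[i] for i in range(n)]
    y = [y[i] + d[i] for i in range(n)]           # unit step (quadratic case)

print(y)  # near-consensus minimizer of the penalized objective
```

The truncated series satisfies I − M·H = (D⁻¹B)^{K+1} for the implied approximate inverse M, so each extra hop K improves the Newton approximation geometrically while keeping communication local.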
We consider minimization of a sum of convex objective functions where the components of the objective are available at different nodes of a network and nodes are allowed to only communicate with their neighbors. The use of distributed subgradient or gradient methods is widespread but they often suffer from slow convergence since they rely on first order information, which leads to a large number of...
This paper adapts a recently developed regularized stochastic version of the Broyden, Fletcher, Goldfarb, and Shanno (BFGS) quasi-Newton method for the solution of support vector machine classification problems. The proposed method is shown to converge almost surely to the optimal classifier at a rate that is linear in expectation. Numerical results show that the proposed method exhibits a convergence...
RES, a regularized stochastic version of the Broyden–Fletcher–Goldfarb–Shanno (BFGS) quasi-Newton method, is proposed to solve strongly convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second-order...
A regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method is proposed to solve optimization problems with stochastic objectives that arise in large scale machine learning. Stochastic gradient descent is the currently preferred solution methodology but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional...
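The two regularizations these abstracts refer to can be sketched in a simplified form: subtract a multiple DELTA of the iterate difference from the gradient difference before the BFGS inverse update, and bias the step by a multiple GAMMA of the identity. This is a hedged sketch in the spirit of RES, not the paper's exact algorithm; the problem data (A, b) and all constants are illustrative, and NOISE is set to zero here so the run is deterministic:

```python
# Simplified regularized (stochastic) BFGS sketch on a 2-d quadratic.
import random

random.seed(0)
d = 2
A = [[3.0, 0.5], [0.5, 1.0]]    # Hessian of the expected objective
b = [1.0, -2.0]                 # its minimizer
NOISE = 0.0                     # set > 0 to mimic stochastic gradients
DELTA = 0.1                     # curvature-pair regularization
GAMMA = 0.05                    # identity bias added to the step

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(d)) for i in range(d)]

def stoch_grad(x):
    g = matvec(A, [x[i] - b[i] for i in range(d)])
    return [g[i] + NOISE * random.gauss(0.0, 1.0) for i in range(d)]

def bfgs_inverse_update(H, s, y):
    # Standard BFGS inverse update: H' = (I - r s yT) H (I - r y sT) + r s sT.
    r = 1.0 / sum(y[i] * s[i] for i in range(d))
    Hy = matvec(H, y)
    yHy = sum(y[i] * Hy[i] for i in range(d))
    return [[H[i][j] - r * (s[i] * Hy[j] + Hy[i] * s[j])
             + (r * r * yHy + r) * s[i] * s[j] for j in range(d)]
            for i in range(d)]

x = [0.0, 0.0]
H = [[float(i == j) for j in range(d)] for i in range(d)]
g = stoch_grad(x)
for t in range(300):
    eps = 0.5 / (1.0 + t / 50.0)                  # diminishing step size
    Hg = matvec(H, g)
    x_new = [x[i] - eps * (Hg[i] + GAMMA * g[i]) for i in range(d)]
    g_new = stoch_grad(x_new)
    s = [x_new[i] - x[i] for i in range(d)]
    y = [g_new[i] - g[i] - DELTA * s[i] for i in range(d)]   # regularized pair
    if sum(y[i] * s[i] for i in range(d)) > 1e-10:  # keep H positive definite
        H = bfgs_inverse_update(H, s, y)
    x, g = x_new, g_new

print(x)  # approaches b = [1.0, -2.0]
```

The DELTA term keeps the estimated curvature bounded away from zero and the GAMMA term keeps the step from vanishing when the inverse approximation has small eigenvalues; both safeguards are what make the quasi-Newton update usable when the curvature pairs are built from noisy gradients.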
A stochastic implementation of the Davidon-Fletcher-Powell (DFP) quasi-Newton method to minimize dual functions of optimal resource allocation problems in wireless systems is introduced. While the use of dual stochastic gradient descent algorithms is widespread, they suffer from a slow convergence rate. Application of second order methods, on the other hand, is impracticable because computation of dual...
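The core of the method above is the DFP inverse-Hessian update, shown below on a single deterministic curvature pair (a sketch with illustrative numbers; the paper applies it to stochastic dual gradients with additional safeguards):

```python
# One DFP inverse-Hessian update and its secant property.
d = 2
H = [[1.0, 0.0], [0.0, 1.0]]   # current inverse-Hessian approximation
s = [0.3, -0.1]                # iterate difference  x_{t+1} - x_t
y = [0.8, 0.1]                 # gradient difference g_{t+1} - g_t

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(d)) for i in range(d)]

Hy = matvec(H, y)
sy = sum(s[i] * y[i] for i in range(d))      # must be > 0 for a valid update
yHy = sum(y[i] * Hy[i] for i in range(d))
# DFP: H' = H + s sT / (sT y) - (H y)(H y)T / (yT H y)
Hn = [[H[i][j] + s[i] * s[j] / sy - Hy[i] * Hy[j] / yHy
       for j in range(d)] for i in range(d)]

print(matvec(Hn, y), s)  # the update enforces the secant condition H' y = s
```

Because H'·y = H·y + s − H·y = s by construction, the updated matrix reproduces the observed curvature along the most recent direction at the cost of two rank-one corrections, which is what makes curvature estimation affordable when full dual Hessians are out of reach.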