We propose a novel progressive Gaussian filter for nonlinear stochastic systems. A Gaussian approximation of the posterior is computed without an explicit assumption of a linear relation between the system state and the measurement. This yields better estimation quality than nonlinear Kalman filters such as the EKF or UKF. In this work, we use the progressive filter framework,...
We consider stochastic nonlinear time-variant systems with imperfect state information in the context of model predictive control. The optimal control performance can only be achieved by closed-loop feedback policies, which in fact anticipate future behavior. However, the computation of these policies is in general not tractable due to the presence of the dual effect, i.e., the control actions not...
In this paper, we address control of Markov Jump Linear Systems without mode observation via dynamic output feedback. Because the optimal nonlinear control law for this problem is intractable, we assume a linear controller. Under this assumption, the control law computation can be expressed in terms of an optimization problem that involves Bilinear Matrix Inequalities. Alternatively, it is possible...
We consider closed-loop feedback (CLF) stochastic model predictive control of nonlinear time-invariant systems with imperfect state information. In this class of control problems, future information feedback is considered in the decision making process, and thus, the effect of the control influencing the state uncertainty is taken into account. The main challenge in the solution is to find a good...
In this work, the problem of pole identification of discrete-time single-input single-output (SISO) linear time-invariant (LTI) systems directly from input-output data is considered. The solution to this nonlinear estimation problem is derived in the form of the general Bayesian estimation framework, as well as a practical approximate solution obtained by applying statistical linearization. The derived direct...
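To make the idea of estimating poles directly from input-output data concrete, here is a minimal sketch for a hypothetical first-order ARX model y[k] = a·y[k-1] + b·u[k-1], whose single pole equals the coefficient a. It uses plain least squares rather than the Bayesian / statistically linearized estimator of the abstract; the system, signals, and function name are illustrative assumptions.

```python
# Minimal sketch (hypothetical): recover the pole of a first-order SISO
# ARX model y[k] = a*y[k-1] + b*u[k-1] directly from input-output data
# via ordinary least squares. This is NOT the paper's Bayesian method;
# it only illustrates direct pole estimation without an intermediate
# model-building step.
import random

def estimate_first_order_pole(u, y):
    # Normal equations for theta = [a, b] with regressor
    # phi[k] = [y[k-1], u[k-1]] and target y[k].
    s_yy = s_yu = s_uu = s_yt = s_ut = 0.0
    for k in range(1, len(y)):
        py, pu, t = y[k - 1], u[k - 1], y[k]
        s_yy += py * py
        s_yu += py * pu
        s_uu += pu * pu
        s_yt += py * t
        s_ut += pu * t
    det = s_yy * s_uu - s_yu * s_yu
    a = (s_uu * s_yt - s_yu * s_ut) / det
    b = (s_yy * s_ut - s_yu * s_yt) / det
    return a, b  # the pole of the estimated model is a

# Noise-free data from a system with true pole 0.8 and gain 1.0:
random.seed(0)
u = [random.uniform(-1.0, 1.0) for _ in range(200)]
y = [0.0]
for k in range(1, 200):
    y.append(0.8 * y[k - 1] + 1.0 * u[k - 1])

a_hat, b_hat = estimate_first_order_pole(u, y)
print(round(a_hat, 3), round(b_hat, 3))  # recovers 0.8 and 1.0
```

With noise-free data the least-squares estimate is exact up to floating-point error; with noisy data it becomes biased, which is one motivation for the statistical treatment in the abstract.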
In this work, we derive a distance measure for the detection of changes in the behavior of linear dynamic single-input-single-output (SISO) systems based on input-output data. The distance is calculated as a function of the system poles, which are directly estimated from the given data. Poles represent a system as a set and have no identities, which is analogous to the nature of association-free multi-target...
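The point that poles form an unordered set without identities can be illustrated with a simple assignment-minimizing set distance, analogous to association-free multi-target measures. The averaging over the best permutation below is a hypothetical stand-in, not the specific distance derived in the abstract.

```python
# Minimal sketch (hypothetical): an association-free distance between two
# pole sets of equal cardinality, taken as the minimum average pairwise
# distance over all assignments. Poles are treated as an unordered set,
# as in multi-target tracking; the concrete measure in the paper differs.
from itertools import permutations

def pole_set_distance(p, q):
    # p, q: lists of complex poles with len(p) == len(q)
    best = float("inf")
    for perm in permutations(q):
        d = sum(abs(a - b) for a, b in zip(p, perm)) / len(p)
        best = min(best, d)
    return best

nominal = [0.9, 0.5 + 0.3j, 0.5 - 0.3j]
# Same poles in a different order, with the real pole drifted 0.9 -> 0.85:
shifted = [0.5 - 0.3j, 0.85, 0.5 + 0.3j]
print(pole_set_distance(nominal, shifted))  # 0.05 / 3
```

Because the minimum runs over all assignments, the reordering of the set contributes nothing to the distance; only the actual pole drift does.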
Increasing demand for Nonlinear Model Predictive Control that can handle highly noise-corrupted systems has recently given rise to stochastic control approaches. While these approaches provide high-quality results in noisy environments, they share one drawback: high computational demand and, as a consequence, generally a short prediction horizon. In this paper, we propose...
The main problem of stochastic nonlinear model predictive control (SNMPC) is that the equations for state prediction and calculation of the expected reward are in general not solvable in closed form. A popular approach is to approximate the occurring continuous probability density functions by a discrete density representation, which allows an analytical solution of the SNMPC equations. In this paper,...
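The core trick described here, replacing a continuous density by a discrete representation so the prediction and expected-reward integrals become finite sums, can be sketched as follows. The dynamics, cost, and equally weighted random samples are illustrative assumptions; the paper's discrete density representation is generally constructed more carefully than plain sampling.

```python
# Minimal sketch (hypothetical): approximate a continuous state density by
# a discrete (Dirac-mixture-style) set of samples, so that propagating the
# state through nonlinear dynamics and computing the expected stage cost
# reduce to finite sums. Dynamics, cost, and sampling are illustrative.
import math
import random

random.seed(1)

def dynamics(x, u):
    # example nonlinear state transition
    return math.sin(x) + u

def stage_cost(x):
    return x * x

# Discrete approximation of a Gaussian prior N(0.5, 0.2^2),
# with 1000 equally weighted components:
samples = [random.gauss(0.5, 0.2) for _ in range(1000)]

u = -0.3  # candidate control input
# Propagate each component, then evaluate the expected cost as a sum:
predicted = [dynamics(x, u) for x in samples]
expected_cost = sum(stage_cost(x) for x in predicted) / len(predicted)
print(round(expected_cost, 3))
```

For each candidate input sequence, an SNMPC scheme would repeat this propagate-and-sum step over the prediction horizon and pick the sequence minimizing the accumulated expected cost.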