The Lamb wave approach in Structural Health Monitoring (SHM) is mainly concerned with defect detection. It relies on the analysis of physically propagating signals. In this paper, a sparse representation approach for modeling signal scattering in a discretized monitoring area is proposed. The overcomplete dictionary can be determined explicitly from Lamb wave propagation theory, with columns reflecting...
In this paper, we consider a method for solving the device-free localization (DFL) problem, which detects spatial obstructions via a wireless network. A dictionary learning approach based on difference-of-convex (DC) programming and the DC algorithm is proposed to indicate the target location from learning data. By measuring the variation in the received signal strength of the wireless links indicating...
An efficient algorithm for overcomplete dictionary learning with the ℓp-norm as a sparsity constraint, achieving sparse representation of a set of known signals, is presented in this paper. The special importance of the ℓp-norm (0 < p < 1) has been recognized in recent studies on sparse modeling; it can lead to stronger sparsity-promoting solutions than the ℓ1-norm. The ℓp-norm, however, leads...
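As a minimal sketch of why ℓp regularization with 0 < p < 1 promotes sparsity more strongly than ℓ1, the snippet below solves an ℓp-penalized least-squares problem by iteratively reweighted least squares (IRLS). This is an illustration only, not the algorithm from the abstract above; the function name `irls_lp` and all parameter values are assumptions.

```python
import numpy as np

def irls_lp(A, y, lam=0.5, p=0.5, iters=60, eps=1e-8):
    """Approximately solve min_x ||y - A x||^2 + lam * sum_i |x_i|^p
    (0 < p < 1) by IRLS: each iteration majorizes the l_p penalty by
    a weighted quadratic and solves a ridge-type system in closed form."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]   # least-squares starting point
    for _ in range(iters):
        # majorizer weights: w_i = (p/2) * (x_i^2 + eps)^{(p-2)/2}
        # small |x_i| -> huge weight -> driven toward zero;
        # large |x_i| -> tiny weight -> almost no shrinkage.
        w = (p / 2.0) * (x**2 + eps) ** ((p - 2) / 2.0)
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
    return x
```

Note how, unlike ℓ1 soft-thresholding, large coefficients are left nearly unbiased while small ones are crushed toward zero, which is the "stronger sparsity-promoting" behavior the abstract refers to.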
Sparse representation has been proven to be a powerful tool for the analysis and processing of signals and images. Whereas most existing sparse representation methods are based on the synthesis model, this paper addresses sparse representation with the so-called analysis model. ℓ1/2 regularization theory in compressive sensing (CS) shows that the ℓ1/2-norm regularizer can yield stronger sparsity-promoting...
In analysis dictionary learning, the learned dictionary may contain similar atoms, leading to a degenerate dictionary. To address this problem, we propose a novel incoherent analysis dictionary learning algorithm that uses the ℓ1-norm for sparsity together with a coherence penalty. The whole problem is convex but nonsmooth due to the sparsity regularizer and the coherence penalty. Hence, the...
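The quantity such a coherence penalty targets can be illustrated with the standard mutual-coherence measure of a dictionary. This helper is a generic sketch, not the paper's penalty term; `mutual_coherence` is an assumed name.

```python
import numpy as np

def mutual_coherence(D):
    """Mutual coherence of a dictionary D (atoms in columns): the largest
    absolute inner product between two distinct l2-normalized atoms.
    Values near 1 flag near-duplicate (degenerate) atoms."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)  # normalize each atom
    G = np.abs(Dn.T @ Dn)                              # absolute Gram matrix
    np.fill_diagonal(G, 0.0)                           # ignore self-products
    return G.max()
```

A coherence penalty pushes this value down during learning, discouraging the similar-atom degeneracy described above.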
Dictionary learning often uses the ℓp-norm (p < 2) for sparsity. In this paper, we use a different regularizer structure, the mixed ℓ1,2-norm, for group sparsity. We propose a method based on a decomposition scheme and alternating optimization that turns the whole problem into a set of sub-minimizations of univariate functions, each of which depends on only one dictionary atom...
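The group-sparsity effect of a mixed ℓ1,2 regularizer can be sketched through its proximal operator, block soft-thresholding, which zeroes out entire coefficient groups at once. This is a generic illustration, not the decomposition scheme of the paper above; `prox_group` and its arguments are assumptions.

```python
import numpy as np

def prox_group(v, groups, lam):
    """Proximal operator of the mixed l_{1,2} (group-sparsity) norm:
    for each index group g, shrink the subvector v_g radially toward
    zero, zeroing whole groups whose l2 norm falls below lam."""
    out = np.zeros_like(v)
    for g in groups:
        ng = np.linalg.norm(v[g])
        if ng > lam:
            out[g] = (1.0 - lam / ng) * v[g]  # uniform radial shrinkage
        # else: the whole group is set to zero (group sparsity)
    return out
```

In contrast to the entrywise ℓ1 prox, coefficients live or die together with their group, which is what makes the ℓ1,2 structure suitable when atoms naturally cluster.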
In this paper, we propose an overcomplete nonnegative dictionary learning method for sparse representation of signals, based on nonnegative matrix factorization (NMF) with the ℓ1/2-norm as the sparsity constraint. By introducing the ℓ1/2-norm as the sparsity constraint into NMF, we show that the problem can be cast as sequential optimization problems of quadratic functions and quartic functions...
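A rough sketch of ℓ1/2-penalized NMF can be given with the common multiplicative-update heuristic, where the gradient of the penalty adds a (λ/2)H^(−1/2) term to the denominator of the H update. This is not the paper's sequential quadratic/quartic scheme; `nmf_l12` and all parameter values are illustrative assumptions.

```python
import numpy as np

def nmf_l12(V, r, lam=0.05, iters=300, seed=0, eps=1e-12):
    """Sketch of minimizing ||V - W H||_F^2 + lam * sum(sqrt(H))
    subject to W, H >= 0, via multiplicative updates."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1   # strictly positive random init
    H = rng.random((r, m)) + 0.1
    for _ in range(iters):
        # l_{1/2} penalty contributes (lam/2) * H^{-1/2} to the gradient,
        # hence the extra term in the multiplicative denominator.
        H = H * (W.T @ V) / (W.T @ W @ H + 0.5 * lam * H ** -0.5 + eps)
        H = np.maximum(H, eps)                    # keep H strictly positive
        W = W * (V @ H.T) / (W @ H @ H.T + eps)   # standard NMF update for W
    return W, H
```

Multiplicative updates preserve nonnegativity by construction, which is why they are a natural fit for NMF-based dictionary learning.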