Keypoint detection and description in a continuous scale space achieve better robustness to scale change than their counterparts in a discretized scale space. State-of-the-art methods first decompose a continuous scale space into M + 1 component images weighted by M-order polynomials of the scale σ, and then reconstruct the value at an arbitrary point in the scale space as a linear combination of the component images...
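The decompose-and-recombine idea in this abstract can be sketched in a few lines; the function names and the least-squares fitting below are our own illustration of the general polynomial scale-space model, not the paper's actual procedure.

```python
import numpy as np

def fit_component_images(images, sigmas, M):
    """Fit M+1 component images C_k so that image(sigma) ~ sum_k C_k * sigma**k.

    images: array of shape (num_scales, H, W), responses at sampled scales
    sigmas: the sampled scale values (hypothetical inputs for this sketch)
    """
    V = np.vander(np.asarray(sigmas), M + 1, increasing=True)  # (num_scales, M+1)
    flat = images.reshape(len(sigmas), -1)                      # (num_scales, H*W)
    coeffs, *_ = np.linalg.lstsq(V, flat, rcond=None)           # (M+1, H*W)
    return coeffs.reshape(M + 1, *images.shape[1:])

def reconstruct_at_scale(components, sigma):
    """Evaluate the polynomial model at an arbitrary continuous scale sigma."""
    powers = sigma ** np.arange(len(components))
    return np.tensordot(powers, components, axes=1)

# Toy check: data generated exactly by a degree-2 polynomial in sigma.
rng = np.random.default_rng(0)
C_true = rng.standard_normal((3, 4, 4))
sigmas = np.array([1.0, 1.5, 2.0, 2.5])
images = np.stack([sum(C_true[k] * s**k for k in range(3)) for s in sigmas])

components = fit_component_images(images, sigmas, M=2)
recon = reconstruct_at_scale(components, 1.75)  # an unsampled, in-between scale
target = sum(C_true[k] * 1.75**k for k in range(3))
print(np.allclose(recon, target))  # True: degree-2 data is recovered exactly
```

Once the component images are fitted, evaluating at any continuous σ costs only one weighted sum, which is the efficiency argument behind this family of methods.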
Exemplar-based methods have shown their potential in synthesizing novel but visually plausible content for image super-resolution (SR), by using the implicit knowledge conveyed by the exemplar database. In practice, however, it is common that unwanted artifacts and low-quality results are produced due to the use of inappropriate exemplars. How are the “right” exemplars defined and identified? This...
Single image super-resolution (SR) generates a high-resolution (HR) image by estimating the mapping function between image patches of different resolutions. By leveraging the notion of regression, the mapping function estimation task is often transformed into predicting the mapping function's derivatives. Although higher-order derivatives lead to a more accurate mapping function, current algorithms...
This paper presents a new method of lithological mapping using extended one-class kernel sparse representation, a new one-class classifier. In the proposed method, to address the spectral variability of lithological types, learning vector quantization for novelty detection was adopted to produce several clusters before the classification process. The one-class kernel sparse representation was adopted...
In this paper, a fully automatic building reconstruction method for high-resolution interferometric synthetic aperture radar (InSAR) data is presented. The method is based on a stochastic geometrical model. First, a building detection procedure is applied to the full image and the entire scene is divided into building clips. After that, the reconstruction process is carried out for each building...
Image compression by the warped stretch transform is introduced, where the input image is reshaped by a signal-dependent mapping, i.e., a designed “warp kernel”. The warped image can be downsampled at a lower uniform rate for the same PSNR, effectively achieving reversible, context-aware non-uniform sampling.
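A schematic 1D analogue of this warped-sampling idea can be demonstrated as follows. The gradient-based "warp kernel" and all names here are our own illustration under simplifying assumptions, not the paper's actual transform: the coordinate warp concentrates uniform samples where the signal varies quickly.

```python
import numpy as np

N = 1000
t = np.linspace(0.0, 1.0, N)
x = np.sin(2 * np.pi * 40 * t**2)          # chirp: detail concentrated near t = 1

# Warp density proportional to local variation (a crude stand-in warp kernel).
density = np.abs(np.gradient(x)) + 1e-3
warp = np.cumsum(density)
warp = (warp - warp[0]) / (warp[-1] - warp[0])  # monotone map [0,1] -> [0,1]

M = 120                                     # uniform samples in the warped domain
u = np.linspace(0.0, 1.0, M)
t_samples = np.interp(u, warp, t)           # non-uniform sample locations in t
x_samples = np.interp(t_samples, t, x)

# Reconstruct on the original grid and compare with naive uniform sampling.
x_warped_rec = np.interp(t, t_samples, x_samples)
t_uni = np.linspace(0.0, 1.0, M)
x_uni_rec = np.interp(t, t_uni, np.interp(t_uni, t, x))

err_warped = np.sqrt(np.mean((x - x_warped_rec) ** 2))
err_uniform = np.sqrt(np.mean((x - x_uni_rec) ** 2))
print(err_warped < err_uniform)
```

Because the warp is a monotone (invertible) map, the non-uniform sampling it induces is reversible, which mirrors the "reversible, context-aware" claim in the abstract.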
The four V's of Big Data (Volume, Velocity, Variety, and Veracity) pose challenges in many different aspects of real-time systems. Among these, securing big data sets and reducing processing time and communication bandwidth are of utmost importance. In this paper, we adopt a Compressive Sensing (CS) based framework to address all three issues. We implement Compressive Sensing using Deterministic...
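A generic compressive-sensing round trip can be sketched as below. This is our own illustration with a random Gaussian measurement matrix and Orthogonal Matching Pursuit, not the deterministic framework the paper implements; all names are illustrative.

```python
import numpy as np

def omp(Phi, y, k):
    """Recover a k-sparse signal from y = Phi @ x via Orthogonal Matching Pursuit."""
    residual = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # best-matching atom
        support.append(j)
        sub = Phi[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)  # refit on current support
        residual = y - sub @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(7)
n, m, k = 128, 40, 4
Phi = rng.standard_normal((m, n)) / np.sqrt(m)          # random measurement matrix
x = np.zeros(n)
idx = rng.choice(n, k, replace=False)
x[idx] = rng.choice([-1.0, 1.0], size=k) * (1.0 + rng.random(k))
y = Phi @ x                                             # m << n measurements

x_hat = omp(Phi, y, k)
print(np.allclose(x_hat, x, atol=1e-6))
```

The point relevant to the abstract is the measurement side: only m = 40 numbers are stored or transmitted instead of n = 128, which is where the bandwidth and processing savings come from.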
In this paper, we propose data-driven optimization techniques for enhancing the ISAR image reconstruction process. We utilize keystone formatting to compensate for the range cell migration caused by the rotational motion and the well-established technique based on time-frequency signal analysis for the ISAR image reconstruction. The proposed optimization method for the keystone formatting determines...
Sparse coding models have been widely used to decompose monocular images into linear combinations of small numbers of basis vectors drawn from an overcomplete set. However, little work has examined sparse coding in the context of stereopsis. In this paper, we demonstrate that sparse coding facilitates better depth inference with sparse activations than comparable feed-forward networks of the same...
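The decomposition this abstract refers to, inferring a sparse combination of overcomplete basis vectors, can be sketched with ISTA (iterative soft-thresholding); this is a generic sparse-coding sketch under our own choices of dictionary and penalty, not the paper's stereopsis model.

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.05, n_iter=500):
    """Minimize 0.5*||x - D s||^2 + lam*||s||_1 over the code s via ISTA."""
    L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the data-term gradient
    s = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ s - x)               # gradient of the quadratic data term
        z = s - g / L
        s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return s

rng = np.random.default_rng(1)
D = rng.standard_normal((20, 50))           # overcomplete dictionary: 50 atoms in R^20
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
s_true = np.zeros(50)
s_true[[3, 17, 42]] = [1.5, -2.0, 1.0]      # a 3-sparse ground-truth code
x = D @ s_true

s_hat = ista_sparse_code(D, x)
print(np.count_nonzero(np.abs(s_hat) > 0.5))
```

The sparse activation pattern of `s_hat`, a handful of strongly active units out of 50, is the kind of representation the abstract contrasts with dense feed-forward activations.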
We address the problem of sampling and reconstruction of time-limited signals. Finite-energy, time-limited signals can be represented using time-limited orthogonal Fourier basis functions, and a finite linear combination of these can approximate a signal under the assumption that most of the signal energy is concentrated in a certain frequency band. The expansion coefficients in this approximation are uniform...
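The finite Fourier-series approximation described here can be illustrated numerically; the signal, band limit, and variable names below are our own toy choices, not the paper's setup.

```python
import numpy as np

T = 1.0
N = 1024
t = np.arange(N) / N * T

# A test signal whose energy lies entirely in low harmonics of the interval [0, T].
x = 1.0 + 0.8 * np.cos(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 5 * t)

# Keep only the finitely many Fourier basis functions with |f| <= K/T.
K = 8
X = np.fft.fft(x)                                   # expansion coefficients
keep = np.abs(np.fft.fftfreq(N, d=T / N)) <= K / T
X_trunc = np.where(keep, X, 0.0)
x_hat = np.fft.ifft(X_trunc).real                   # finite linear combination

print(np.allclose(x_hat, x))  # True: all signal energy is within the kept band
```

When the energy-concentration assumption holds exactly, as in this toy signal, the truncated expansion reproduces the signal; for real signals the discarded out-of-band energy bounds the approximation error.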
In this work, a novel Gaussian Process Regression (GPR) based framework is proposed to super-resolve long-range captured iris polar images. The framework uses a linear-kernel covariance function in GPR during super-resolution of the iris image, without an external dataset. The new technique is proposed to reduce the time taken to super-resolve the iris polar image patches. The framework...
Reconstructing missing areas of arbitrary shape and size is particularly important in error-prone communication as well as in applications where motion compensation is conducted, such as multi-image super-resolution or frame-rate up-conversion. To that end, frequency selective extrapolation is an effective image reconstruction technique. This approach was originally designed for block losses and has...
Artificial awareness is an interesting way of realizing intelligent perception in machines. Since the foreground object provides more useful information for perception and a more informative description of the environment than background regions, the saliency characteristics of the foreground object can be treated as an important cue for the objectness property. Thus, a sparse reconstruction...
Super-resolution enhancement is a promising approach to enhance the spatial resolution of images. To obtain a satisfying super-resolved result, regularization-term design and blur-kernel estimation are two important aspects that need to be carefully considered. In this paper, we propose a robust regularized super-resolution reconstruction approach based on two sparsity properties to deal with these...
Images such as CT scans, X-ray images, CCTV videos, and mobile-phone camera shots are typical low-resolution image sources. A digital camera captures a continuous scene and transforms it into a discrete representation in terms of space and intensity. The sampling process may create aliasing and information loss when the sampling rate falls below the Nyquist rate. The image therefore suffers from an ill-posed problem...
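The aliasing effect mentioned in this abstract is easy to demonstrate in 1D: sampling a sinusoid below its Nyquist rate makes it indistinguishable from a lower-frequency alias. The rates and names below are illustrative only.

```python
import numpy as np

fs = 10.0                         # sampling rate (Hz); Nyquist limit is fs/2 = 5 Hz
n = np.arange(20)
t = n / fs

f_high = 9.0                      # above the Nyquist limit
f_alias = fs - f_high             # its alias at 1 Hz

x_high = np.cos(2 * np.pi * f_high * t)
x_alias = np.cos(2 * np.pi * f_alias * t)
print(np.allclose(x_high, x_alias))  # True: the two sets of samples coincide
```

Because the samples of the 9 Hz and 1 Hz cosines are identical, no reconstruction method can tell them apart from the samples alone, which is exactly why under-sampled imaging is ill-posed.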
ALMA is a revolutionary instrument in its scientific concept, its engineering design, and its organisation as a global effort. ALMA and other incoming radio telescopes deliver large amounts of data that are useful for sky image reconstruction. In this context, MEM is one of the most recognized reconstruction algorithms in radio interferometry and is based on a Bayesian approach. Our results show that...
Lost image areas of different sizes and arbitrary shapes can occur in many scenarios such as error-prone communication, depth-based image rendering, or motion-compensated wavelet lifting. The goal of image reconstruction is to restore these lost image areas as close to the original as possible. Frequency selective extrapolation is a block-based method for efficiently reconstructing lost areas in images...
Quantitative microscopy (QM) has become a key tool in systems-level drug discovery and in the diagnosis of diseases such as cancers and neurodegenerative disorders. To date, however, QM has been limited to epifluorescence microscopy, which requires chemical labels and special imaging modalities and often causes phototoxicity. Differential Interference Contrast (DIC) microscopy is label-free and low-phototoxic; thus it has...
In this paper, we study how to initialize the convolutional neural network (CNN) model for training on a small dataset. Specifically, we try to extract discriminative filters from the pre-trained model for a target task. On the basis of relative entropy and linear reconstruction, two methods, Minimum Entropy Loss (MEL) and Minimum Reconstruction Error (MRE), are proposed. The CNN models initialized by...
Three-Dimensional Hahn moments are a powerful tool in image processing applications and pattern classification. In this work, we propose a new method for computing Three-Dimensional Hahn moments. The method is based on matrix multiplication and the symmetry property to reduce the complexity and computation time of volumetric image reconstruction. Experimental results showed that...