Ultrasound (US) data suffer from speckle noise as well as intensity inhomogeneities due to underlying changes in acoustic properties of tissue structure and/or the effects of acoustic focusing and attenuation. This paper describes a 2D and 3D variational level-set method for segmenting such data. To deal with the local statistics of speckle noise, the data term of the level-set energy function is...
When imaging the heart using a 2D ultrasound probe, different views can manifest depending on the location and angulation of the probe. Some of these views have been designated standard views because key cardiac structures are presented clearly and are easy to assess in them. We present an approach for automatic recognition and classification of these standard views, as a potential enabler for automated...
Cortical parcellation of the human brain typically serves as a basis for higher-level analyses such as connectivity analysis and investigation of brain network properties. Inferences drawn from such analyses can be significantly confounded if the brain parcels are inaccurate. In this paper, we propose a novel affinity matrix structure based on multiple kernel density estimation for cortical parcellation...
Due to the nature of fMRI acquisition protocols, slices in the plane of acquisition are not acquired simultaneously or sequentially, and therefore are temporally misaligned with each other. Slice timing correction (STC) is a critical preprocessing step that corrects for this misalignment. STC is applied in all major software packages. To date, little effort has gone towards assessing the optimal method...
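The core of slice timing correction is temporal interpolation: each slice's time series is resampled at a common reference time. The sketch below, which is not from the abstract above, illustrates the idea with plain linear interpolation for a single voxel; `slice_timing_correct` is a hypothetical helper, and real packages typically use sinc or spline interpolation.

```python
import numpy as np

def slice_timing_correct(ts, slice_offset, tr):
    """Shift one slice's time series to a common reference time.

    ts           : 1D array, one voxel's signal sampled once per TR
    slice_offset : acquisition delay of this slice within the TR (s)
    tr           : repetition time (s)

    Linear interpolation between neighbouring volumes; edge samples
    are held constant.
    """
    n = len(ts)
    acquired = np.arange(n) * tr + slice_offset   # when samples were taken
    reference = np.arange(n) * tr                 # when we want them
    return np.interp(reference, acquired, ts)

# toy example: a ramp signal acquired 0.5 s late with TR = 2 s
ts = np.array([0.0, 2.0, 4.0, 6.0])
corrected = slice_timing_correct(ts, slice_offset=0.5, tr=2.0)
```

For the ramp above, interpolation yields [0.0, 1.5, 3.5, 5.5]: interior samples are shifted back by the slice delay, while the first sample is held at the boundary value.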
Sparse estimation has received a lot of attention due to its broad applicability. In sparse channel estimation, the parameter vector with sparsity characteristic can be well estimated from noisy measurements through sparse adaptive filters. In previous studies, most works use the mean square error (MSE) based cost to develop sparse filters, which is rational under the assumption of Gaussian distributions...
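A classic example of an MSE-based sparse adaptive filter, and one plausible baseline for the family the abstract refers to, is the zero-attracting LMS (ZA-LMS): the standard LMS update plus an l1 "zero attractor" that shrinks small taps toward zero. The sketch below is a generic ZA-LMS, not the algorithm proposed in the paper.

```python
import numpy as np

def za_lms(x, d, order, mu=0.05, rho=1e-4):
    """Zero-attracting LMS: MSE-driven LMS update plus an l1
    zero-attractor term that pulls small taps toward zero."""
    w = np.zeros(order)
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # regressor [x[n], x[n-1], ...]
        e = d[n] - w @ u                   # a-priori estimation error
        w += mu * e * u - rho * np.sign(w)
    return w

# identify a sparse channel from noisy input/output data
rng = np.random.default_rng(0)
h = np.zeros(8)
h[2] = 1.0                                 # sparse true channel: one active tap
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w = za_lms(x, d, order=8)
```

After convergence the active tap is recovered while the zero attractor keeps the inactive taps close to zero, which is the advantage of sparsity-aware updates on channels like this one.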
In complex-valued signal processing, estimation algorithms require complete knowledge (or accurate estimation) of the second-order statistics; this makes Gaussian processes (GP) well suited for modelling complex signals, as they are designed in terms of covariance functions. Dealing with bivariate signals using GPs requires four covariance matrices, or equivalently, two complex matrices. We propose...
A recently proposed non-parametric maximum likelihood (NPML) channel estimator shows superior performance to the least squares (LS) estimator in the presence of non-Gaussian noise. The derivation of the NPML estimator assumed perfect knowledge of the channel order, which, however, does not hold in most applications. In this paper, we first study the effects of an inaccurate order assumption on the...
We present new methods for pointwise spatially-adaptive filtering of anisotropic multivariable signals. It is assumed that the observations are given by a broad class of models with a signal-dependent variance. The proposed methods are based on the local quasi-likelihood, incorporating the directional-windowed local polynomial approximations (LPA) of the signal. The intersection of confidence intervals...
The massive volume of video and image data compels its storage in a distributed file system. To process the data stored in the distributed file system, Google proposed a programming model named MapReduce. Existing methods of processing images held in such a distributed file system require the whole image, or a substantial portion of it, to be streamed every time a filter is applied. In this...
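The MapReduce pattern for image filtering can be sketched in a few lines: split the image into tiles, map a filter over each tile independently, and reduce by stitching the results back together. This toy single-machine sketch (not the paper's system) uses a point-wise threshold filter, which partitions cleanly; neighbourhood filters would need overlapping tile borders.

```python
import numpy as np

def split_tiles(img, tile_size):
    """Map-stage input: yield ((row, col) key, tile data) pairs."""
    h, w = img.shape
    for i in range(0, h, tile_size):
        for j in range(0, w, tile_size):
            yield (i, j), img[i:i + tile_size, j:j + tile_size]

def map_filter(key, tile):
    """Map: apply the filter (here a simple 0/1 threshold) to one tile."""
    return key, (tile > 128).astype(np.uint8)

def reduce_stitch(shape, pairs):
    """Reduce: reassemble the filtered tiles into the output image."""
    out = np.zeros(shape, dtype=np.uint8)
    for (i, j), t in pairs:
        out[i:i + t.shape[0], j:j + t.shape[1]] = t
    return out

img = np.arange(256, dtype=np.uint8).reshape(16, 16)
pairs = [map_filter(k, t) for k, t in split_tiles(img, tile_size=8)]
result = reduce_stitch(img.shape, pairs)
```

Because each tile is processed independently, the map calls could run on separate nodes holding separate blocks of the file, avoiding streaming the whole image to one machine.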
This paper proposes a reduced reference quality assessment model based on spiking neural network (SNN) in order to predict which image highlights perceptual noise in unbiased global illumination algorithms. These algorithms provide photo-realistic images by increasing the number of paths as proved by Monte Carlo theory. The objective is to find the number of paths that are required in order to ensure...
Image deblurring algorithms have been evolving over many decades. Although there are many algorithms available today that produce reasonably good results, their speed of execution makes them less appealing for many real-time applications. We address this issue by improving the speed of execution of an existing algorithm using parallelization and other optimizations. The algorithm we selected was published...
In recent years, the emergence of many intelligent autonomous systems has become possible thanks to tremendous advances in technologies such as computer vision, automation and control engineering, and sensor technology. One such intelligent system is the autonomous underwater vehicle (AUV) for ocean floor mapping by SONAR technology. The success of this smart, precise autonomous system depends...
This paper introduces an efficient approach to blind deblurring of palm print images suffering from severe motion blur. First, an improved Hough transform method is proposed to accurately detect the blur angle and length of a palm print image. Analysis of the blurred image is performed in the Fourier domain, which contains important information about the blur orientation of an image. After detecting the blur...
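The abstract's improved Hough transform is not reproduced here, but the underlying Fourier-domain cue can be illustrated for the simplest case: a length-L box motion blur multiplies the spectrum by a sinc-like factor whose nulls encode the blur length. The sketch below assumes a synthetic, axis-aligned circular box blur; `estimate_blur_length` is a hypothetical helper, and real blurred photographs need the angle search and robustness measures the paper describes.

```python
import numpy as np

def estimate_blur_length(blurred, axis=1):
    """Estimate the length of a 1-D box motion blur from spectral nulls.

    A length-L box blur along `axis` multiplies the spectrum by a
    sinc-like factor whose first zero falls at frequency index N/L,
    so L is recovered as N divided by the first spectral null.
    """
    n = blurred.shape[axis]
    spec = np.abs(np.fft.fft(blurred, axis=axis)).mean(axis=1 - axis)
    nulls = spec[1:n // 2] < 1e-8 * spec.max()
    u0 = int(np.argmax(nulls)) + 1        # first near-zero frequency index
    return n // u0

# synthetic example: circular horizontal box blur of length 8
rng = np.random.default_rng(1)
img = rng.random((64, 64))
kernel = np.zeros(64)
kernel[:8] = 1.0 / 8.0
blurred = np.fft.ifft(np.fft.fft(img, axis=1) * np.fft.fft(kernel), axis=1).real
```

On this synthetic image the first null sits at frequency index 8, recovering the blur length 64 / 8 = 8.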
In this paper we propose a new anisotropic smoothing method that mimics the statistical noise distribution. First, we estimate the probability density function (pdf) of the noise data; then we incorporate the estimated pdf into a convolution formulation that, when expressed on a mesh, gives rise to an update formula that iteratively reduces the noise. To preserve the edges and corners during...
We present a robust algorithm that registers one point set to another for the nonrigid case. We formulate the problem as a Gaussian mixture model (GMM) density estimation by considering one of the point sets as the GMM centroids and the other as the data points generated by the GMM. We displace the centroids and make them register to the data by maximizing the likelihood. To facilitate the process, we introduce...
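The centroids-as-GMM formulation (in the spirit of Coherent Point Drift) can be shown in miniature with a translation-only displacement and a fixed isotropic variance; the paper's nonrigid transformation and robustness terms are omitted. This is a minimal sketch, not the authors' algorithm, and `gmm_register_translation` is a hypothetical helper.

```python
import numpy as np

def gmm_register_translation(X, Y, sigma2=1.0, iters=50):
    """EM estimation of a translation t aligning Y (GMM centroids)
    to X (data points): equal-weight isotropic Gaussian components."""
    t = np.zeros(X.shape[1])
    for _ in range(iters):
        C = Y + t
        # E-step: responsibility P[m, n] of centroid m for data point n
        d2 = ((X[None, :, :] - C[:, None, :]) ** 2).sum(-1)
        P = np.exp(-d2 / (2 * sigma2))
        P /= P.sum(axis=0, keepdims=True)
        # M-step: translation maximizing the expected log-likelihood
        diff = X[None, :, :] - Y[:, None, :]
        t = (P[:, :, None] * diff).sum((0, 1)) / P.sum()
    return t

Y = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
X = Y + np.array([2.0, -1.0])             # data = centroids shifted by (2, -1)
t = gmm_register_translation(X, Y)
```

EM recovers the shift (2, -1): soft correspondences sharpen as the centroids move onto the data, and the M-step for a pure translation is just a responsibility-weighted mean of the point differences.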
The purpose of this functional magnetic resonance imaging (fMRI) study was to investigate the effects of smoothing kernel size and the extent of physiological noise correction on neuronal activity estimation. The fMRI data acquired from heavy smokers were used to evaluate the effect of preprocessing options. Three different smoothing kernel sizes (i.e., 4, 6, and 8 mm) were applied to compare neuronal...
The parameters play an important role in the performance of support vector regression (SVR). In order to solve the problem of parameter optimization for SVR, we first transform the problem of parameter optimization into a problem of nonlinear system state estimation; then, we propose a novel algorithm based on a Dual Recursive Variational Bayesian Adaptive Square-Cubature Kalman Filter (DRVB-ASCKF),...
This paper presents an automatic deblurring approach for motion-blurred images. The approach explores intensity and gradient priors to estimate the motion blur kernel from a single blurred image. In this way, the motion blur kernel can be well estimated not only on daytime images but also on nighttime images. An efficient optimization method is given for the prior-based approach. Besides, a cost-effective...
A conceptually simple hybrid Super Resolution (SR) algorithm is proposed using an adaptive edge sharpening algorithm. Most existing super resolution algorithms are not robust under highly noisy conditions due to the ambiguity between the sharpening and denoising processes. The Low Resolution (LR) images are processed with the adaptive edge sharpening algorithm, which is capable of capturing...
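Edge sharpening is commonly built on unsharp masking: subtract a low-pass version of the image and add the high-pass residual back. The sketch below is the plain, non-adaptive baseline, not the abstract's algorithm, whose point is precisely to modulate the sharpening locally so that noise is not amplified along with edges.

```python
import numpy as np

def unsharp_mask(img, amount=1.0):
    """Plain unsharp masking: add back the high-pass residual.
    A 3x3 box blur stands in for the low-pass filter; borders are
    handled with replicate padding."""
    h, w = img.shape
    pad = np.pad(img, 1, mode='edge')
    low = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return img + amount * (img - low)

flat = np.ones((4, 4))                       # constant region: unchanged
step = np.tile([0.0, 0.0, 1.0, 1.0], (4, 1)) # vertical step edge
sharp_flat = unsharp_mask(flat)
sharp_step = unsharp_mask(step)
```

On the step edge the filter produces the expected under/overshoot on either side of the transition, while the flat region passes through unchanged; an adaptive scheme would attenuate `amount` wherever the local residual looks like noise rather than structure.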
Enhancement of text information in images captured by a mobile camera is a very challenging task due to the high variation between background and foreground, which contains shadows, poor contrast and non-uniform illumination. In this paper, a denoising and binarization algorithm that uses phase congruency features is proposed to extract text information from document images...