Ultrasound imaging is one of the safest techniques in clinical diagnosis. The presence of speckle noise has made denoising of ultrasound images indispensable for the proper diagnosis of diseases. Non-local similarity and low-rank approaches are an emerging area of research in the field of image diagnosis. However, their advantages have not yet been exploited in the denoising of ultrasound images. In...
Recent light field imaging technology has been attracting a lot of interest due to its potential applications in a large number of areas, including Virtual Reality and Augmented Reality (VR/AR), teleconferencing, and e-learning. Light Field (LF) data provides rich visual information, such as scene rendering with changes in depth of field, viewpoint, and focal length. However, Light Field...
Video coding has become widespread on mobile devices. At the same time, the adopted resolutions have grown, demanding more coding efficiency and motivating the development of the new state-of-the-art standard, High Efficiency Video Coding (HEVC). However, to achieve the required efficiency, the new standard greatly increased the computational intensity. That, allied to real-time constraints...
Post-HEVC is the emerging video coding standard beyond the High Efficiency Video Coding (HEVC) standard. It is more complex in its transformation and prediction steps, but it offers the opportunity to code and compress 3D and 360° videos. This paper presents different statistical analyses of Post-HEVC encoded videos, in particular analyses of 1D and 2D transformation types and of intra and...
Although there has been increasing demand for more reliable web applications, JavaScript bugs abound in web applications. In response to this issue, researchers have proposed automated fault detection tools, which statically analyze the web application code to find bugs. While useful, these tools either only target a limited set of bugs based on predefined rules, or they do not detect bugs caused...
We present a neural network model that learns to produce music scores directly from audio signals. Instead of employing commonplace processing steps, such as frequency transform front-ends, harmonicity and scale priors, or temporal pitch smoothing, we show that a neural network can learn such steps on its own when presented with the appropriate training data. We show how such a network can perform...
In video coding, the transform-quantization scheme has been widely used to remove perceptual redundancy. Typically, transform and quantization are carried out separately on the prediction residuals. However, since the residual blocks commonly exhibit diverse spectral characteristics, quantization with a predefined constant dead zone under a given parameter will inevitably deteriorate the compression...
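The dead-zone quantization this abstract discusses can be sketched as a simple scalar quantizer. The offsets follow the common H.264/HEVC-style convention (1/6 for inter, 1/3 for intra); the function names are illustrative.

```python
import math

def deadzone_quantize(coeff, step, offset=1.0 / 6):
    """Uniform dead-zone quantizer: level = sign(c) * floor(|c|/step + offset).

    With offset < 1/2, the zero bin (the "dead zone") is wider than the
    other bins, so small transform coefficients are forced to zero."""
    sign = 1 if coeff >= 0 else -1
    return sign * math.floor(abs(coeff) / step + offset)

def dequantize(level, step):
    """Reconstruction: scale the level back by the quantization step."""
    return level * step
```

For example, with step 8 a coefficient of 3 falls inside the dead zone and quantizes to 0, while 10 quantizes to level 1.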
Texture classification has been extensively studied in computer vision. Recent research shows that the combination of Fisher vector (FV) encoding and convolutional neural network (CNN) provides significant improvement in texture classification over the previous feature representation methods. However, by truncating the CNN model at the last convolutional layer, the CNN-based FV descriptors would not...
Steganographic systems are used for the transmission of hidden data in an original signal. The article describes an algorithm for hidden data transmission using a speech signal as the carrier. The echo method is used for data embedding. In order to improve the decoding efficiency of the embedded data, a voicing-correction procedure and an informed-coding mechanism were developed and implemented...
We present our previous work in [17] in a more generic way to construct q-ary Golay complementary sets and near-complementary sets of size N and sequence length M · N^m by using different seed sequences, where m is an arbitrary non-negative integer, M is the length of the seed sequences, and N is a power of 2. The Boolean functions of these sequences will also be derived with our method. To illustrate it,...
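The defining property of a Golay complementary set — aperiodic autocorrelations summing to zero at every nonzero shift — can be checked directly. The length-4 binary pair below is a classical example, not one produced by the paper's construction.

```python
def aperiodic_autocorr(seq, shift):
    """C_seq(u) = sum_i seq[i] * seq[i + u], the aperiodic autocorrelation."""
    return sum(seq[i] * seq[i + shift] for i in range(len(seq) - shift))

def is_golay_pair(a, b):
    """A pair is Golay complementary iff C_a(u) + C_b(u) = 0 for all u != 0."""
    return all(aperiodic_autocorr(a, u) + aperiodic_autocorr(b, u) == 0
               for u in range(1, len(a)))
```

For instance, (+ + + -) and (+ + - +) form a Golay pair, while two identical all-ones sequences do not.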
Recent research in computed tomographic imaging has focused on developing techniques that enable reduction of the X-ray radiation dose without loss of quality of the reconstructed images or volumes. While penalized weighted-least squares (PWLS) approaches have been popular for CT image reconstruction, their performance degrades for very low dose levels due to the inaccuracy of the underlying WLS statistical...
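The PWLS objective mentioned here can be sketched on a toy linear system: noisier measurements get smaller weights, and a quadratic penalty regularizes the solution. This is plain gradient descent on a tiny dense system; real CT reconstruction operates on projection data and vastly larger, structured systems.

```python
def pwls_reconstruct(A, y, w, beta=0.0, iters=500, lr=0.1):
    """Minimise sum_i w_i * (y_i - (A x)_i)^2 + beta * ||x||^2 by
    gradient descent. The weights w_i encode measurement reliability,
    which is the statistical idea behind PWLS."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        resid = [yi - sum(aij * xj for aij, xj in zip(row, x))
                 for row, yi in zip(A, y)]
        grad = []
        for k in range(n):
            g = 2.0 * beta * x[k]
            g -= 2.0 * sum(wi * ri * row[k]
                           for row, ri, wi in zip(A, resid, w))
            grad.append(g)
        x = [xk - lr * gk for xk, gk in zip(x, grad)]
    return x
```

With unit weights, no penalty, and a consistent system, this recovers the exact least-squares solution.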
This paper presents two intra prediction algorithms for the High Efficiency Video Coding (HEVC) encoder that reduce the computational complexity and increase the encoding speed. The first algorithm takes advantage of the high spatial correlation among neighboring pixels to substitute the reference samples with the first row, or the first and second rows, of the current block to be predicted, while the pixels...
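HEVC's intra modes predict a block from reference samples along its top and left borders. The simplest of them, DC mode, can be sketched as follows (a simplification: HEVC additionally filters certain boundary samples).

```python
def dc_intra_predict(top_refs, left_refs):
    """HEVC-style DC intra prediction: fill the whole block with the
    rounded integer mean of the top and left reference samples."""
    refs = list(top_refs) + list(left_refs)
    dc = (sum(refs) + len(refs) // 2) // len(refs)   # rounded integer mean
    return [[dc] * len(top_refs) for _ in range(len(left_refs))]
```

With top references all 4 and left references all 8, every predicted pixel becomes 6.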
Recent work in video compression has shown that using multiple 2D transforms instead of a single transform in order to de-correlate residuals provides better compression efficiency. These transforms are tested competitively inside a video encoder and the optimal transform is selected based on the Rate Distortion Optimization (RDO) cost. However, one needs to encode a syntax to indicate the chosen...
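The competitive RDO loop described here can be sketched with toy transforms and a crude rate proxy (nonzero coefficient count, plus one symbol for the transform index). Real encoders use actual entropy-coded bit counts and energy-preserving transforms; everything below is illustrative.

```python
def rd_cost(coeffs, lam=0.1, step=1.0):
    """Toy rate-distortion cost: squared quantisation error plus a rate
    proxy (count of nonzero levels, +1 symbol for the transform index)."""
    levels = [round(c / step) for c in coeffs]
    dist = sum((c - l * step) ** 2 for c, l in zip(coeffs, levels))
    rate = sum(1 for l in levels if l != 0) + 1
    return dist + lam * rate

def select_transform(residual, transforms, lam=0.1):
    """Test each candidate transform competitively and keep the index
    with the lowest RD cost, as an encoder's RDO loop would."""
    costs = [rd_cost(t(residual), lam) for t in transforms]
    return costs.index(min(costs))
```

A flat residual compacts into one coefficient under a differencing transform, so RDO picks it; a residual that is already sparse keeps the identity transform.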
Multiple transforms have received considerable attention recently, especially in the course of an exploration conducted by MPEG and ITU toward the standardization of the next-generation video compression algorithm. This joint team has developed software, called the Joint Exploration Model (JEM), which outperforms the HEVC standard by over 25%. The transform step in JEM consists of Adaptive Multiple...
Most existing binary embedding methods prefer compact binary codes (b-dimensional) to avoid the high computational and memory cost of projecting high-dimensional visual features (d-dimensional, b ≪ d)...
Numerous methods have been proposed for person re-identification, most of which however neglect the matching efficiency. Recently, several hashing based approaches have been developed to make re-identification more scalable for large-scale gallery sets. Despite their efficiency, these works ignore cross-camera variations, which severely deteriorate the final matching accuracy. To address the above...
Iris recognition uses the iris's unique pattern for biometric authentication. The iris has advantages in terms of universality, distinctiveness, permanence, and collectability. This research proposes an implementation of half-polar iris localization and normalization to improve the performance of iris recognition using a modified low-cost camera. In the development phase, the CASIA-IrisV1 dataset is used...
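The half-polar normalization this abstract refers to can be sketched with a Daugman-style "rubber sheet" mapping restricted to half of the iris: the annulus between the pupil and iris boundaries is sampled onto a fixed rectangular grid. The grid sizes, nearest-neighbour sampling, and synthetic image below are illustrative, not the paper's settings.

```python
import math

def half_polar_normalize(image, cx, cy, r_pupil, r_iris,
                         radial_res=8, angular_res=16):
    """Rubber-sheet model over the upper half of the iris (0..180 deg):
    sample radii from the pupil boundary to the iris boundary onto a
    radial_res x angular_res strip, using nearest-neighbour lookup."""
    strip = []
    for i in range(radial_res):
        r = r_pupil + (r_iris - r_pupil) * i / (radial_res - 1)
        row = []
        for j in range(angular_res):
            theta = math.pi * j / angular_res   # half polar: 0 .. pi
            x = int(round(cx + r * math.cos(theta)))
            y = int(round(cy - r * math.sin(theta)))
            row.append(image[y][x])
        strip.append(row)
    return strip
```

The fixed-size strip makes iris codes comparable regardless of pupil dilation or camera distance.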
We consider the problem of encoding a finite set of vectors into a small number of bits while approximately retaining information on the angular distances between the vectors. By deriving improved variance bounds related to binary Gaussian circulant embeddings, we largely fix a gap in the proof of the best known fast binary embedding method. Our bounds also show that well-spreadness assumptions on...
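The angle-preserving binary embedding in question can be illustrated with the classic sign-random-projection identity: the expected Hamming fraction between two codes equals angle/pi. This dense Gaussian sketch ignores the circulant (fast) structure the abstract studies; names and parameters are illustrative.

```python
import math
import random

def binary_embed(vec, projections):
    """1-bit Gaussian embedding: the sign of each random projection."""
    return [1 if sum(p * v for p, v in zip(proj, vec)) >= 0 else 0
            for proj in projections]

def estimate_angle(code_a, code_b):
    """Hamming fraction * pi estimates the angle between the original
    vectors (sign-random-projection identity)."""
    frac = sum(a != b for a, b in zip(code_a, code_b)) / len(code_a)
    return frac * math.pi
```

With enough bits, the estimated angle between two orthogonal vectors concentrates around pi/2.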
We consider the problem of polar coding for transmission over a non-stationary sequence of independent binary-input memoryless symmetric (BMS) channels {W_i}_{i=1}^∞, where the i-th encoded bit is transmitted over W_i. We show, for the first time, a polar coding scheme that achieves the average symmetric capacity Ī({W_i}_{i=1}^∞) ≜ lim_{N→∞} (1/N) Σ_{i=1}^{N} I(W_i), assuming that the limit exists. The polar coding scheme...
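The averaged capacity can be made concrete for binary symmetric channels, an illustrative special case of BMS channels, where I(W_i) = 1 − h2(p_i) with h2 the binary entropy function. The finite-N average below stands in for the limit in the abstract.

```python
import math

def h2(p):
    """Binary entropy function, in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Symmetric capacity of a BSC with crossover probability p."""
    return 1.0 - h2(p)

def average_capacity(crossover_probs):
    """Finite-N version of I-bar = lim (1/N) * sum_i I(W_i)."""
    return sum(bsc_capacity(p) for p in crossover_probs) / len(crossover_probs)
```

A non-stationary mix of a perfect channel (p = 0) and a useless one (p = 0.5) has average symmetric capacity 1/2.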
We prove that the solvability of systems of linear equations and related linear algebraic properties are definable in a fragment of fixed-point logic with counting that only allows polylogarithmically many iterations of the fixed-point operators. This enables us to separate the descriptive complexity of solving linear equations from full fixed-point logic with counting by logical means. As an application...
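The decision problem at stake — solvability of a linear system — can be made concrete over GF(2) with a standard Gaussian-elimination check. This illustrates only the problem itself, not the fixed-point-logic definability machinery the paper studies.

```python
def solvable_gf2(rows, rhs):
    """Decide solvability of a linear system over GF(2) by Gaussian
    elimination on the augmented matrix (rows of 0/1 coefficients)."""
    m = [row[:] + [b] for row, b in zip(rows, rhs)]
    ncols = len(rows[0])
    r = 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(m)) if m[i][c]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c]:
                m[i] = [a ^ b for a, b in zip(m[i], m[r])]
        r += 1
    # inconsistent iff some row reduces to 0 = 1
    return not any(all(v == 0 for v in row[:-1]) and row[-1] == 1
                   for row in m)
```

For example, x1 + x2 = 1 together with x1 + x2 = 0 is unsolvable, while any system with an identity coefficient matrix is solvable.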