Canonical distributed quantization schemes do not scale to large sensor networks due to the exponential decoder storage complexity that they entail. Prior efforts to tackle this issue have largely been limited to the suboptimal schemes of source grouping and decoding, thus failing to use all available information at the decoder. We propose a new decoding paradigm where all received bits are used in...
The H.264/AVC intra-frame codec is widely used to compress image/video data for applications such as Digital Still Camera (DSC), Digital Video Camera (DVC), Television Studio Broadcast, and Surveillance video. Intra-prediction is one of the three most compute-intensive processing functions in the H.264/AVC baseline decoder and therefore consumes a significant number of compute cycles on a processor. In this...
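To illustrate what the intra-prediction step computes, here is a minimal sketch of two of the standard H.264 4x4 luma prediction modes (vertical and DC), working from reconstructed neighbour pixels; it is a generic illustration, not the design described in the abstract.

```python
import numpy as np

def intra4x4_vertical(top):
    # Vertical mode: each column of the 4x4 block repeats the
    # reconstructed pixel directly above it.
    return np.tile(np.asarray(top, dtype=np.int32), (4, 1))

def intra4x4_dc(top, left):
    # DC mode: predict every pixel with the rounded mean of the
    # 4 top and 4 left neighbour pixels.
    dc = (sum(top) + sum(left) + 4) >> 3
    return np.full((4, 4), dc, dtype=np.int32)
```

The encoder picks, per block, the mode whose prediction minimises the residual cost, which is where most of the cycle count goes.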
Scalable video coding (SVC) is an extension of the H.264/AVC standard. Because of the variety of video transmission systems, storage systems, and computing capabilities, a video bit stream needs to adapt to the various needs of different terminals and network conditions. SVC is a highly attractive solution to these problems: it provides network-friendly scalability at the bit-stream level with a...
We propose a new encoder-friendly image compression strategy for high-throughput cameras and other scenarios of resource-constrained encoders. The encoder performs ℓ∞-constrained predictive coding (DPCM coupled with uniform scalar quantizer), while the decoder solves an inverse problem of ℓ2 restoration of ℓ∞-coded images. Although designed for minimum encoder complexity, the new codec outperforms...
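A minimal sketch of the encoder side described here: DPCM with the previous reconstruction as predictor, coupled with a uniform scalar quantizer of step 2δ+1, which bounds every sample's reconstruction error by δ (the ℓ∞ constraint). Function name and loop structure are illustrative assumptions, not the paper's codec.

```python
def linf_dpcm(samples, delta):
    """Near-lossless DPCM: the previous reconstructed sample is the
    predictor; quantizer step 2*delta+1 bounds per-sample error by delta."""
    step = 2 * delta + 1
    prev = 0
    indices, recon = [], []
    for x in samples:
        e = x - prev                                    # prediction residual
        # Mid-tread uniform quantizer: |e - q*step| <= delta for all e.
        q = (e + delta) // step if e >= 0 else -((-e + delta) // step)
        r = prev + q * step                             # decoder reconstruction
        indices.append(q)
        recon.append(r)
        prev = r                                        # closed-loop prediction
    return indices, recon
```

The ℓ2 restoration stage the abstract mentions would then post-process the reconstruction at the decoder; it is not sketched here.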
In this paper, the authors propose a variable-length source code that achieves very good performance even for short sequences and has low encoding complexity, although it does not achieve the theoretical limit. Furthermore, its performance for infinite codeword length can be predicted with closed-form expressions and has been proven to remain good even when a suboptimum decoder with very low complexity...
Summary form only given. In low bit rate video coding, the frame rate of the input sequence can be reduced to half or an even smaller fraction by skipping or deleting frames before compression; the temporal resolution is then restored via up-sampling at the decoder side. Numerous algorithms have been developed to address the problem of temporal resolution improvement. In practice, the quality of the up-sampled...
This paper discusses the pattern matching problem on variable-to-fixed-length codes (VF codes). A VF code is a coding scheme whose codeword lengths are fixed, which makes it suitable for compressed pattern matching. However, there are few reports on its efficiency so far. We have investigated the compression ratios and encoding/decoding speeds, as well as pattern matching performance, on Tunstall...
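For reference, a Tunstall dictionary, the classic VF code the study measures, is built by repeatedly splitting the most probable parse word into its single-symbol extensions until the dictionary fills the fixed codeword budget. A minimal sketch, assuming a memoryless source given as a symbol-to-probability dict:

```python
import heapq

def tunstall(probs, bits):
    """Build a Tunstall parse dictionary of at most 2**bits words."""
    # Max-heap of (negated probability, word); start with single symbols.
    heap = [(-p, s) for s, p in probs.items()]
    heapq.heapify(heap)
    # Each split removes 1 leaf and adds len(probs), a net gain of
    # len(probs) - 1 words; split while the budget still allows it.
    while len(heap) + len(probs) - 1 <= 2 ** bits:
        negp, word = heapq.heappop(heap)
        for s, p in probs.items():
            heapq.heappush(heap, (negp * p, word + s))
    return sorted(word for _, word in heap)
```

Because every dictionary word maps to a codeword of exactly `bits` bits, pattern matching can proceed on codeword boundaries in the compressed stream.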
Summary form only given. Wyner-Ziv (WZ) video coding is a particular case of distributed video coding (DVC). Although work with improved performance has appeared in recent years, the coding efficiency of the state-of-the-art WZ codec is still far from that of the state-of-the-art prediction-based codec, especially for content with high and complex motion. Moreover, most reported WZ...
We propose a method to improve the traditional character-based PPM text compression algorithm for natural languages. Considering a text file as a sequence of alternating words and non-words, the basic idea of our algorithm is to encode non-words and prefixes of words using character-based context models, and to encode suffixes of words using dictionary models. By using dictionary models, the algorithm can encode...
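The word/non-word alternation this scheme relies on can be produced by a simple front-end tokenizer; the regex and function name below are assumptions for illustration, not the authors' implementation.

```python
import re

def tokenize(text):
    """Split text into an alternating sequence of word and non-word
    tokens; concatenating the tokens reproduces the text exactly."""
    return re.findall(r"[A-Za-z]+|[^A-Za-z]+", text)
```

Word tokens would then be split into a prefix (coded with character contexts) and a suffix (coded against a dictionary), while non-word tokens stay character-coded.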
In this work we present an effective and computationally simple algorithm for image compression based on Hilbert Scanning of Embedded quadTrees (Hi-SET). It allows an image to be represented as an embedded bitstream along a fractal scanning function. Embedding is an important feature of modern image compression algorithms; Salomon [1, p. 614] notes that another feature, and perhaps a unique one, is...
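For context, the Hilbert scan that linearises a pixel grid can be generated with the standard index-to-coordinate recurrence; this is a generic sketch of the curve itself, not the Hi-SET coder.

```python
def hilbert_order(order):
    """Visit order of the Hilbert curve on a 2**order x 2**order grid:
    consecutive points are always 4-neighbours, which preserves 2-D
    locality in the 1-D scan."""
    def d2xy(n, d):
        x = y = 0
        t, s = d, 1
        while s < n:
            rx = 1 & (t // 2)
            ry = 1 & (t ^ rx)
            if ry == 0:                      # reflect/rotate quadrant
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            x += s * rx
            y += s * ry
            t //= 4
            s *= 2
        return x, y
    n = 2 ** order
    return [d2xy(n, d) for d in range(n * n)]
```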
The String-to-Dictionary Matching Problem is defined, in which a string is searched for in all the possible concatenations of the elements of a given dictionary, with applications to compressed matching in variable to fixed length encodings, such as Tunstall's. An algorithm based on suffix trees is suggested and experiments on natural language text are presented suggesting that compressed search might...
The goal of the MobileASL (American Sign Language) research project is to enable sign language communication over the U.S. cellular network, which is low bandwidth and lossy. Data loss can greatly impact the quality of compressed video because of temporal and spatial error propagation. We investigate techniques to minimize the effect of data loss for improving compressed video sign language conversations...
In this paper, a simple, efficient, low-power off-chip memory design is proposed that fully exploits the features of DRAM and video applications, and overcomes the drawbacks in algorithm complexity and system modification of embedded compression, a popular way to decrease the power consumption of off-chip memory. Integrating the scheme into a video decoder will not involve...
We consider the two-way relay network where two nodes communicate via a relay. We assume that the data at the nodes are correlated (e.g., measurements in a sensor network) and that there is no direct communication between the nodes. The nodes communicate via the relay using a two-phase protocol consisting of an uplink part over an orthogonal multiple access channel and a downlink part over a broadcast...
Compressive sensing (CS) has recently been enthusiastically promoted as a joint sampling and compression approach. The advantages of CS over conventional signal compression techniques are architectural: the CS encoder is made signal-independent and computationally inexpensive by shifting the bulk of system complexity to the decoder. While these properties of CS allow signal acquisition and communication...
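The architectural split described here, a cheap signal-independent linear encoder and a heavy decoder, can be sketched as follows. Note the oracle-support least-squares step is a stand-in for a real sparse decoder (e.g. OMP or basis pursuit), which is where the actual decoding complexity lives; all dimensions and the sparse signal are made-up illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 20, 3             # signal length, measurements, sparsity

x = np.zeros(n)
support = [5, 17, 40]
x[support] = [1.0, -2.0, 0.5]   # k-sparse signal to acquire

# Encoder: one fixed random matrix, no signal analysis at all.
phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = phi @ x                     # m << n linear measurements

# Decoder (simplified): with the true support known, least squares
# on the restricted matrix recovers the coefficients exactly.
coef, *_ = np.linalg.lstsq(phi[:, support], y, rcond=None)
x_hat = np.zeros(n)
x_hat[support] = coef
```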
Summary form only given. This paper proposes an error recovery method for PPM. The proposed method is based on static PPM, which has partial decoding capability, and divides the source data into blocks. Errors in a block are detected by checking the synchronization symbol inserted at the end of each block before compression. An erroneous block is recovered using a parity block.
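The parity-based recovery step can be sketched with XOR parity over equal-length blocks; this assumes one parity block protecting a group of blocks against a single detected-bad block, which may differ from the paper's exact redundancy scheme.

```python
def parity_block(blocks):
    """XOR parity over equal-length byte blocks."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

def recover(blocks, parity, bad):
    """Rebuild block index `bad` (flagged by a failed synchronization
    check) by XORing the parity with all remaining good blocks."""
    rest = [b for i, b in enumerate(blocks) if i != bad]
    return parity_block(rest + [parity])
```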
The application of distributed coding to media applications in which low-power, low-complexity encoder devices are essential is a new paradigm. In this paper, we propose a new pixel-domain Distributed Video Coding (DVC) scheme in which both temporal and spatial correlations are exploited at the decoder. The Slepian-Wolf decoder is modified by introducing a source decoder to exploit Wyner-Ziv...
In the present paper, a software-based, effective ECG data compression algorithm is proposed. The whole algorithm is implemented in C. The algorithm is tested on ECG data from all 12 leads taken from the PTB Diagnostic ECG Database (PTB-DB). In this compression methodology, the individual standard deviation of each part of the signal is calculated first. To achieve strictly lossless compression...
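The first step named here, a per-segment standard deviation of the signal, can be sketched as follows; the segment length and function name are assumptions, and how the statistics then drive the coding is not specified in the visible text.

```python
import statistics

def segment_stats(signal, seg_len):
    """Population standard deviation of each consecutive segment,
    used to characterise local signal activity before coding."""
    return [statistics.pstdev(signal[i:i + seg_len])
            for i in range(0, len(signal), seg_len)]
```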
In this paper, we propose a new compression artifact removal method for MPEG-2 moving pictures. We utilize the Total Variation (TV) regularization method to obtain a texture component into which blocking noise and mosquito noise are decomposed. These artifacts are removed using selective filters controlled by edge information from the structure component and by the motion vectors. The experimental...
The paper proposes a novel method to compress color images with imperceptible quality loss. The algorithm exploits the human visual system's differing error perceptibility across image areas, implementing different non-uniform quantizers for flat, detail, and random blocks of pixels. These blocks are classified based on principal component analysis (PCA) and prediction error. For...