Compressed sensing (CS) based video coding techniques are generally low in complexity, but deliver only marginal compression. Moreover, wireless multimedia devices operate under serious resource constraints, with fluctuations in bandwidth (for up-link traffic) and in available power. In this paper we propose a nested technique in which CS data is further compressed within the CS domain to achieve far better compression...
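To make the nested idea concrete, here is a minimal sketch (my illustration, not the paper's exact scheme; all names and parameters are hypothetical) of measuring a sparse signal and then compressing again inside the CS domain via uniform quantization:

```python
import random

def cs_measure(x, m, seed=0):
    """First stage: m random Gaussian projections (y = Phi x) of signal x."""
    rng = random.Random(seed)
    return [sum(rng.gauss(0, 1) * xi for xi in x) for _ in range(m)]

def quantize(y, step):
    """Second stage: further compression within the CS domain by
    uniform quantization of the measurements (ready for entropy coding)."""
    return [round(v / step) for v in y]

x = [0.0] * 16
x[3], x[9] = 1.5, -2.0        # sparse source signal
y = cs_measure(x, m=6)        # 16 samples -> 6 real-valued measurements
q = quantize(y, step=0.25)    # integer indices for the entropy coder
```

The seeded generator lets a decoder regenerate the same measurement matrix without transmitting it.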
We propose a new method for low-complexity compression of multispectral images based on universal vector quantization. Our approach generalizes the recently developed theory of universal scalar quantization to vector quantization, and uses it in the context of distributed coding. We exploit the availability of side information at the decoder to reduce the encoding rate of a vector quantizer, applied...
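As a toy illustration of how decoder side information can cut the encoding rate, consider a simplified scalar modulo quantizer in the spirit of universal quantization (the paper's vector scheme is more general; these function names are my own):

```python
def modulo_encode(x, step, bins):
    """Transmit only the quantization index modulo `bins`
    (log2(bins) bits instead of a full-range index)."""
    return round(x / step) % bins

def modulo_decode(idx, side_info, step, bins):
    """Resolve the modulo ambiguity with side information: among the
    candidates sharing the transmitted residue, pick the one nearest to
    the side information (correct when |side_info - x| < step*bins/2)."""
    base = round(side_info / step)
    cands = [(base - base % bins) + idx + k * bins for k in (-1, 0, 1)]
    best = min(cands, key=lambda c: abs(c * step - side_info))
    return best * step

x, si = 10.3, 10.1                         # si is known only at the decoder
idx = modulo_encode(x, step=0.5, bins=4)   # 2 bits on the wire
xhat = modulo_decode(idx, si, step=0.5, bins=4)
```

The decoder recovers x to within the quantization step while the encoder never sees the side information.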
The remarkable development of information technology and the diversity of multimedia applications in recent years call for more efficient image compression techniques to improve data transmission and storage capacity. Recent research has shown that classical wavelets cannot optimally exploit the geometric regularities along the contours and edges of objects...
Visual textures such as grass and water consist of dense, random variations in contrast that are perceptually indistinguishable to the human eye. Such textures are costly to encode with image and video codecs. For example, in the state-of-the-art compression standard High Efficiency Video Coding (HEVC), detailed textures typically show relatively strong blurring artifacts at low rates (high QPs)...
Our challenge is the design of a “universal” bit-efficient image compression approach. The prime goal is to allow reconstruction of images with high quality. In addition, we attempt to make the coder and decoder “universal”, such that MPEG-7-like low- and mid-level descriptors are an integral part of the coded representation. To this end, we introduce a sparse Mixture-of-Experts regression approach...
To lower storage costs, storage systems are increasingly transitioning from replication to erasure codes. However, the increased amount of data that must be read and transferred during recovery in an erasure-coded system leads to high degraded-read latency. We design a new parallel degraded read method, Collective Reconstruction Read, which aims to overcome this problem...
A recent video coding standard, High Efficiency Video Coding (HEVC), adopts two in-loop filters to improve coding efficiency: a de-blocking filter (DF) followed by sample adaptive offset (SAO) filtering. The DF improves both coding efficiency and subjective quality without signaling any bits to the decoder, while SAO filtering corrects the quantization...
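For concreteness, SAO's band-offset mode splits the sample range into 32 bands and adds a signalled offset to samples falling in four consecutive bands. A simplified sketch (edge-offset mode and the exact HEVC syntax are omitted):

```python
def sao_band_offset(samples, offsets, band_start, bit_depth=8):
    """Apply SAO band offsets: band index = sample >> (bit_depth - 5)
    yields 32 bands; the four bands starting at band_start get offsets,
    and the result is clipped to the valid sample range."""
    shift = bit_depth - 5
    max_val = (1 << bit_depth) - 1
    out = []
    for s in samples:
        band = s >> shift
        if band_start <= band < band_start + 4:
            s = min(max(s + offsets[band - band_start], 0), max_val)
        out.append(s)
    return out

# reconstructed samples after de-blocking, then SAO correction
corrected = sao_band_offset([10, 20, 30, 38, 50], [2, -1, 3, 1], band_start=1)
```

Samples outside the four signalled bands (here 50, in band 6) pass through unchanged.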
The barcode is an existing coding system that is very fast to scan and more accurate than other coding systems. Barcodes are used extensively because scanning a barcode is far faster than manual data entry. 2D barcodes were developed to increase capacity over 1D barcodes. The challenge in barcode development lies in decoding: the algorithm should be able...
Currently, digital storage media are widely used in various fields, including medicine. The excessive size of digital medical images poses problems for storage and for the time needed to transmit these images over the Internet. To overcome these problems, digital image compression can be applied. This procedure is designed to maintain the image quality...
In this paper, we propose a method for distributed compressed video sensing (DCVS) based on dictionary learning. The proposed method divides the video sequence into groups of pictures (GOPs); each GOP consists of a key-frame followed by a CS-frame. Compressed sensing (CS) is used to exploit the spatial redundancy of frames. At the encoder, key-frames are sampled using random projection methods. To acquire...
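A minimal sketch of the encoder-side flow described above (hypothetical names and subrates; the actual method additionally learns dictionaries for reconstruction):

```python
import random

def random_projection(frame, m, seed=0):
    """CS acquisition y = Phi x with a seeded Gaussian Phi, so the
    decoder can regenerate the same measurement matrix."""
    rng = random.Random(seed)
    return [sum(rng.gauss(0, 1) * x for x in frame) for _ in range(m)]

def encode_gops(frames, gop_size=2, key_rate=0.5, cs_rate=0.2):
    """Split the sequence into GOPs (a key-frame followed by CS-frames)
    and sample each frame at its subrate: more measurements for keys."""
    coded = []
    for i, f in enumerate(frames):
        is_key = i % gop_size == 0
        m = int(len(f) * (key_rate if is_key else cs_rate))
        coded.append(("key" if is_key else "cs",
                      random_projection(f, m, seed=i)))
    return coded

frames = [[float(j % 7) for j in range(20)] for _ in range(4)]
coded = encode_gops(frames)
```

Key-frames get a higher subrate because they anchor the reconstruction of the CS-frames in their GOP.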
The coding performance of the normative encoder of the JPEG XT profile is analyzed, and a problem with the encoder is summarized in this paper. It is pointed out that there is a mismatch in the handling of the quantization error between the normative encoder and the standard decoder. To avoid this problem, an improved structure is proposed that takes the mismatch into account. The experimental...
We consider the task of reconstructing target signals, treated as sparse sources, in a distributed compression scenario where communication between the sources is prohibited but the correlation of information among the sources can be exploited at the decoder. We propose an efficient reconstruction algorithm aided by other given sources serving as multiple side information (SI) for such distributed...
Thanks to the increasing number of images stored in the cloud, external image redundancies can be leveraged to compress images efficiently by exploiting inter-image correlations. In this paper, we propose a novel cloud-based image coding scheme. Unlike current state-of-the-art systems, our method relies on a data dimensionality reduction technique. A global compensation is associated with a locally-weighted...
This paper presents a simple hardware architecture for a quadtree (QT) partitioning based fractal image decoder. Decoding in fractal-based compression is an iterative process that uses the parameters extracted during encoding to converge to a fixed point approximating the original image. The adaptive-sized partitioning scheme provides details of various regions at different...
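The iterative fixed-point convergence can be sketched in 1-D (a toy transform of my own, block size 2, with scale magnitudes below 1 to guarantee contraction):

```python
def fractal_decode(params, n, iterations=20):
    """Start from any image and repeatedly apply the contractive
    transform; it converges to the fixed point stored by the encoder.
    params[i] = (domain_block, scale, offset) for range block i."""
    img = [0.0] * n
    for _ in range(iterations):
        new = img[:]
        for i, (d, s, o) in enumerate(params):
            for j in range(2):                    # block size 2
                new[2 * i + j] = s * img[2 * d + j] + o
        img = new
    return img

# two range blocks, each referencing the other's domain block
params = [(1, 0.5, 10.0), (0, 0.5, 20.0)]
img = fractal_decode(params, n=4)
# fixed point solves b0 = 0.5*b1 + 10, b1 = 0.5*b0 + 20
```

Because the map is contractive, the starting image is irrelevant; the error shrinks geometrically with each iteration.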
This paper presents a hybrid (lossless and lossy) technique for image vector quantization. The codebook is generated in two steps: (1) the training set is sorted by the magnitudes of the training vectors; (2) every nth training vector in the sorted list is selected to form the codevectors. Following that, centroid computation with clustering is performed over repeated iterations to...
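The two-step codebook construction plus iterative refinement can be sketched as follows (a minimal 2-D illustration with hypothetical names; the hybrid lossless/lossy details are omitted):

```python
import math

def initial_codebook(training, k):
    """Steps 1-2: sort training vectors by magnitude, then pick
    every n-th vector of the sorted list as an initial codevector."""
    by_mag = sorted(training, key=lambda v: math.hypot(*v))
    n = max(1, len(by_mag) // k)
    return [by_mag[i * n] for i in range(k)]

def refine(training, codebook, iters=5):
    """Clustering + centroid computation, repeated (LBG-style)."""
    for _ in range(iters):
        clusters = [[] for _ in codebook]
        for v in training:
            nearest = min(range(len(codebook)),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(v, codebook[i])))
            clusters[nearest].append(v)
        codebook = [[sum(d) / len(c) for d in zip(*c)] if c else codebook[i]
                    for i, c in enumerate(clusters)]
    return codebook

training = [[0, 0], [1, 1], [4, 4], [5, 5], [10, 10], [11, 11]]
codebook = refine(training, initial_codebook(training, 3))
```

Magnitude-sorted seeding spreads the initial codevectors across the dynamic range, so the clustering iterations start close to a good solution.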
We address the problem of optimizing the decoding of JPEG-compressed images using an approach based on the “generalized” graph Laplacian, a higher-order generalization of the usual graph Laplacian. The optimal decoding problem is formulated as a non-smooth but convex problem over a graph and solved via the alternating direction method of multipliers. While similar, graph-based optimized decoding...
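To see why a graph prior helps, here is a stripped-down sketch: ordinary graph-Laplacian-regularized recovery of a noisy signal, solved by plain gradient descent (the paper uses the generalized Laplacian and ADMM; this toy, with invented names, only illustrates the smoothing objective):

```python
def laplacian(edges, n):
    """Combinatorial graph Laplacian L = D - A."""
    L = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        L[i][i] += 1.0; L[j][j] += 1.0
        L[i][j] -= 1.0; L[j][i] -= 1.0
    return L

def smooth(y, edges, lam=1.0, iters=200, lr=0.1):
    """Minimize ||x - y||^2 + lam * x^T L x by gradient descent."""
    n = len(y)
    L = laplacian(edges, n)
    x = list(y)
    for _ in range(iters):
        grad = [2 * (x[i] - y[i]) +
                2 * lam * sum(L[i][j] * x[j] for j in range(n))
                for i in range(n)]
        x = [xi - lr * g for xi, g in zip(x, grad)]
    return x

# a lone spike on a 3-node path graph is pulled toward its neighbours
x = smooth([0.0, 10.0, 0.0], edges=[(0, 1), (1, 2)])
```

The quadratic term x^T L x penalizes differences across graph edges, which is what suppresses blocking artifacts in the JPEG setting.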
While autoencoders have been used as an unsupervised machine learning technique for classification and for dimensionality reduction of input data, they are lossy in nature when used alone for data compression. In this work, we propose an image coding scheme using stacked autoencoders in which the reconstruction residuals are entropy-coded to achieve lossless compression. As a case study, we compressed...
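The lossless scheme reduces to: lossy reconstruction plus an entropy-coded residual, so that addition at the decoder restores the input exactly. A minimal sketch (the rounded block mean stands in for the stacked autoencoder here; all names are my own):

```python
def lossy_approx(block):
    """Stand-in for the autoencoder reconstruction: the rounded mean.
    (The work uses stacked autoencoders; any lossy model fits here.)"""
    m = round(sum(block) / len(block))
    return [m] * len(block)

def encode(block):
    recon = lossy_approx(block)
    residual = [v - r for v, r in zip(block, recon)]  # entropy-code this
    return recon, residual

def decode(recon, residual):
    """Adding the decoded residual back makes the scheme lossless."""
    return [r + e for r, e in zip(recon, residual)]

block = [12, 15, 14, 200, 13, 16]
recon, residual = encode(block)
restored = decode(recon, residual)
```

Compression comes from the residual being small and peaked around zero for a good lossy model, which an entropy coder exploits.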
This paper proposes a distributed compressive sensing (CS) scheme for robust image transmission over unknown or time-varying channels when highly correlated images are available at the decoder. A compressed thumbnail is first transmitted, after digital forward error correction (FEC) and modulation, to retrieve the highly correlated images and generate side information (SI) at the decoder. The current residual image...
This paper analyzes how transmission errors in the texture and depth map jointly affect the synthesized virtual view in 3-D video coding. In particular, we propose a framework that decouples the effects attributed to transmission errors in texture and depth to facilitate theoretical analysis. The synthesis distortion due to depth map errors is characterized in the frequency domain using a new approach...
The video captured by each visual sensor in a visual sensor network is first compressed using a block-based compressive sensing algorithm. All videos are encoded independently at different sub-rates and transmitted to a host workstation for reconstruction. The proposed multi-phase joint reconstruction framework is then applied to improve the reconstruction of the lower-subrate videos. In this...