Code compression is used to reduce code size and allow the transportation of digital data from the transmitter (source) to the receiver (destination). Fixed-length codes are converted into variable-length codes with a varying number of bits. The Huffman coding technique is found to be an optimal solution for the transportation of data. It generally follows the lossless compression technique...
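The fixed-to-variable-length conversion described above can be sketched with a minimal Huffman coder; this is the textbook algorithm (frequent symbols get shorter codes), not any specific paper's implementation, and the sample string is illustrative only.

```python
import heapq
from collections import Counter

def huffman_codes(data: str) -> dict:
    """Build a Huffman code table: frequent symbols get shorter codes."""
    freq = Counter(data)
    # Each heap entry: (frequency, tie-breaker id, {symbol: code-so-far})
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # two least-frequent subtrees
        f2, _, right = heapq.heappop(heap)
        # Prefix '0' to the left subtree's codes and '1' to the right's
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, next_id, merged))
        next_id += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
encoded = "".join(codes[ch] for ch in "abracadabra")
```

Because the codes form a prefix-free set, the variable-length stream can be decoded unambiguously, which is what makes the scheme lossless.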
Low-density parity-check (LDPC) codes are nowadays used in modern systems due to their excellent performance. There are different message-passing algorithms for decoding LDPC codes. In this paper, we describe the bit-flip algorithm in brief and give its bit-error performance over an AWGN channel via simulation results. For this communication, we use BPSK, i.e. binary phase-shift keying, digital modulation technique...
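A minimal sketch of the classic hard-decision bit-flipping decoder the abstract refers to: each iteration flips the bits involved in the most unsatisfied parity checks. The toy (7,4) parity-check matrix and the single-error example are assumptions for illustration, not the paper's simulation setup.

```python
def bit_flip_decode(H, r, max_iter=20):
    """Hard-decision bit-flipping decoder for an LDPC-style code.
    H: list of parity-check rows (0/1); r: list of received hard bits."""
    c = list(r)
    n = len(c)
    for _ in range(max_iter):
        # syndrome: 1 marks an unsatisfied parity check
        syndrome = [sum(h[j] * c[j] for j in range(n)) % 2 for h in H]
        if not any(syndrome):
            break  # all checks satisfied: valid codeword
        # count the unsatisfied checks each bit participates in
        counts = [sum(s * h[j] for s, h in zip(syndrome, H)) for j in range(n)]
        worst = max(counts)
        for j in range(n):
            if counts[j] == worst:
                c[j] ^= 1  # flip the most suspicious bits
    return c

# toy example: (7,4) Hamming-style parity-check matrix, one bit error
H = [[1,1,0,1,1,0,0],
     [1,0,1,1,0,1,0],
     [0,1,1,1,0,0,1]]
received = [0,0,1,0,0,0,0]          # all-zero codeword with bit 2 flipped
decoded = bit_flip_decode(H, received)
```

In a BPSK/AWGN simulation the hard bits `r` would come from thresholding the matched-filter outputs before this decoder runs.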
This paper proposes a digital design for the maximum power point tracker (MPPT) controller, which is used for extracting maximum power from the photovoltaic (PV) system and supplying it to the batteries or grid under changing illumination and loads. The proposed digital controller circuit is very different from previous MPPT circuits. This circuit uses minimal power, processing time, and area...
Low-Density Parity-Check (LDPC) codes offer high-performance error correction near the Shannon limit, employing large code lengths and several iterations in the decoding process. The conventional decoding algorithm for LDPC codes is the Log-Likelihood-Ratio-based Belief Propagation (LLR BP), also known as the ‘Sum-Product algorithm’, which gives the best decoding performance and requires the most...
In this paper we consider the Two Way Relay Channel (TWRC) where Noisy Network Coding (NNC) is performed. We propose an optimization scheme in terms of the sum rate by combining the Information Bottleneck (IB) method and the Bi-Section method. Then we compare the performance of the quantizer optimized by the proposed algorithm with other quantizers. We also compare and analyse the maximized sum rate...
Image compression using the Huffman coding technique is one of the simplest compression techniques. Compression of images is an important task, as its implementation is easy and it requires less memory. The purpose of this paper is to analyse the Huffman coding technique, which is basically used to remove redundant bits in data, by analysing different characteristics or specifications like Peak Signal to Noise...
This paper presents a low-complexity Min-Sum algorithm for decoding irregular Low-Density Parity-Check (LDPC) codes. In the proposed algorithm, a significant improvement in error-correcting performance is achieved without increasing hardware complexity, by employing adaptive and optimized normalization factors for the extrinsic information and the log-likelihood-ratio data bits respectively...
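The role of a normalization factor can be seen in a single check-node update of the normalized Min-Sum rule. This sketch uses a fixed factor `alpha = 0.8` as a common illustrative choice; the paper's contribution is precisely that such factors are made adaptive and optimized, which is not reproduced here.

```python
def minsum_check_update(llrs, alpha=0.8):
    """Normalized Min-Sum check-node update (sketch).

    For each incoming LLR, the outgoing message's sign is the product of
    the other incoming signs and its magnitude is alpha times the minimum
    of the other incoming magnitudes. The factor alpha < 1 compensates
    Min-Sum's magnitude overestimation relative to Sum-Product.
    """
    out = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]
        sign = 1
        for v in others:
            if v < 0:
                sign = -sign
        mag = min(abs(v) for v in others)
        out.append(sign * alpha * mag)
    return out

msgs = minsum_check_update([2.0, -1.0, 3.0])
```

Replacing the Sum-Product's hyperbolic-tangent computation with a minimum and a multiply is what makes the algorithm hardware-friendly.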
Cryptography is a technique by which stored and transferred data in a particular form can be comprehended and processed only by those for whom it is intended. In the modern era, cryptography is most often associated with the conversion of plaintext into ciphertext using a process called encryption, and then back to the original plaintext using a process called decryption. Strong cryptography...
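The encryption/decryption round trip can be illustrated with a toy symmetric cipher. This XOR scheme is an assumption chosen purely for illustration; it is emphatically not "strong cryptography" and has no connection to the paper's actual methods.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with a repeating key.
    Because XOR is its own inverse, the same function both encrypts
    and decrypts. Illustration only -- trivially breakable."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

ciphertext = xor_cipher(b"attack at dawn", b"secret")   # encryption
plaintext = xor_cipher(ciphertext, b"secret")            # decryption
```

Real systems replace this with vetted primitives (e.g. AES), but the plaintext → ciphertext → plaintext shape is the same.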
Short Message Service (SMS) via cell phones is a widely used mode of data communication. Currently employed encoding schemes allow the transmission of 160 characters per SMS in English. This drops to 70 characters per SMS if any Indian language including Hindi is used, due to the UNICODE format used therein. Schemes proposed to improve the encoding efficiency of short text messaging generally encode...
At present, an especially urgent task is ensuring a high level of noise immunity in digital information channels (DIC), due to the constant tightening of requirements. For this purpose, the signal is processed while being prepared for transmission, generally using coding stages (concatenated coding) with block codes and continuous error-correcting codes, including coded modulation...
Nowadays, digital communication is very popular due to various advantages: it is less affected by noise and can be easily regenerated by various decoding algorithms. In order to correct errors occurring while transmitting a message through a communication channel, forward error correction (FEC) coding is used. In this FEC method, the convolutional encoder provides a suitable mechanism to...
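A minimal sketch of a convolutional encoder, the building block mentioned above. The rate-1/2, constraint-length-3 code with the standard (7,5) octal generator polynomials is assumed here for illustration; the paper's code parameters may differ.

```python
def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    """Rate-1/2 convolutional encoder (sketch).

    Standard (7,5) octal generators assumed. Each input bit is shifted
    into a k-bit register; each generator taps the register and emits
    the parity (XOR) of the tapped bits, giving 2 output bits per input.
    """
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)  # shift in new bit
        out.append(bin(state & g1).count("1") % 2)   # parity from g1 taps
        out.append(bin(state & g2).count("1") % 2)   # parity from g2 taps
    return out

encoded = conv_encode([1, 0, 1, 1])  # 4 input bits -> 8 coded bits
```

The added redundancy is what a Viterbi or similar decoder later exploits to correct channel errors.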
This paper presents a hybrid (lossless and lossy) technique for image vector quantization. The codebook is generated in two steps: (1) the training set is sorted based on the magnitudes of the training vectors; (2) from the sorted list, the training vector at every nth position is selected to form the codevectors. Following that, centroid computation with clustering is done by repeated iterations to...
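The two-step codebook generation and the iterative refinement can be sketched as follows. The magnitude-sorted initialization follows the abstract's description; the refinement loop is assumed to be the standard Lloyd/k-means-style clustering, and the toy 2-D training set is illustrative only.

```python
import math

def init_codebook(training, k):
    """Step 1+2: sort training vectors by magnitude, then take every
    n-th vector from the sorted list so that k codevectors remain."""
    srt = sorted(training, key=lambda v: math.sqrt(sum(x * x for x in v)))
    step = max(1, len(srt) // k)
    return srt[::step][:k]

def refine(training, codebook, iters=5):
    """Centroid computation with clustering, repeated for a few iterations."""
    for _ in range(iters):
        clusters = [[] for _ in codebook]
        for v in training:
            # assign each training vector to its nearest codevector
            j = min(range(len(codebook)),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(v, codebook[i])))
            clusters[j].append(v)
        # replace each codevector with its cluster centroid
        codebook = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else cb
            for cl, cb in zip(clusters, codebook)
        ]
    return codebook

training = [[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]]
cb = refine(training, init_codebook(training, 2))
```

Sorting by magnitude before sampling spreads the initial codevectors across the training set's dynamic range, which helps the iterations converge to well-separated clusters.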
We present an iterative decoding algorithm for annihilating trapping sets in low-density parity-check codes. In addition to classic messages, subsets of variable nodes communicate directly. We show that by allowing variable nodes to collect information from a larger part of a graph, significant improvement can be achieved in the error-floor region, compared to the classic hard decision decoders. We...
Bi-directional optical flow (so-called BIO) is part of the Joint Exploration Model (JEM), which explores potential coding-efficiency improvements over state-of-the-art video codecs. BIO allows fine motion compensation at the sample level without additional signaling, since the refinement is explicitly calculated using just texture information from both reference frames, under the assumption of the validity of optical...
The recent development and popularity of flash memory require efficient error-correction techniques across its ecosystem, such as gaming and mobile platforms. In this paper, we address an efficient method to decode and correct errors using the parallel-computing capability offered by the Graphics Processing Unit (GPU). This decoder employs the inversion-less Berlekamp-Massey algorithm (iBMA), and Chien search...
Reversibility of logic modules has prominent applications in low-power CMOS design, quantum computing, nanotechnology, and optical computing. On the other hand, the configurability of PLDs (Programmable Logic Devices) reduces NRE (non-recurring engineering) cost and speeds up the design process, offering customers a wide range of logic capacity, features, speed, and voltage characteristics. In this paper, we...
Design and implementation of a Turbo decoder on an FPGA is a challenging task. Various algorithms based on the BCJR algorithm have been proposed to enable the implementation of Turbo decoding in a hardware device. With the advent of FPGAs, the realization of the BCJR algorithm and of different simplified versions of it in hardware is possible. A VHDL implementation of a Turbo decoder using the...
With the rapid increase in network bandwidth, processing high-throughput regular expressions in hardware has become inevitable. This paper presents a novel NFA-based algorithm. Two theorems are proved and used to establish the correctness of the algorithm. Our approach is based on three basic modules for constructing NFAs that can be easily reused in an FPGA or ASIC. The quantitative...
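The core idea behind hardware NFA matching is tracking the full set of active states in parallel, one input symbol at a time, with no backtracking. This software sketch of that set-of-states simulation is an illustration of the general NFA approach, not the paper's three-module construction; the transition table below is a made-up example for the pattern a b* c.

```python
def nfa_match(transitions, start, accept, text):
    """Simulate an NFA by tracking the set of active states per symbol.

    transitions: dict mapping (state, symbol) -> set of next states.
    Every active state advances simultaneously, mirroring how hardware
    NFA engines update all state flip-flops in parallel each cycle.
    """
    states = {start}
    for ch in text:
        states = set().union(*(transitions.get((s, ch), set())
                               for s in states))
        if not states:
            return False  # no live state: early reject
    return bool(states & accept)

# hypothetical NFA for the pattern "ab*c"
trans = {(0, 'a'): {1}, (1, 'b'): {1}, (1, 'c'): {2}}
```

Per-symbol cost is bounded by the number of NFA states regardless of the pattern, which is exactly the property that makes NFAs attractive at line rate.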
This paper deals with an efficient image-compression technique for images having a low dynamic range. Images with a low dynamic range generally have low intensity variations. By taking this fundamental characteristic into account, we can achieve image compression at a higher ratio with small modifications to the existing block-based EZW algorithm. To achieve the improvement in compression ratio,...