Least-squares temporal difference learning (LSTD) has been used mainly to improve the data efficiency of the critic in actor-critic (AC) methods. However, convergence analysis of the resulting algorithms is difficult when the policy is changing. In this paper, a new AC method based on LSTD is proposed under the discounted criterion. The method comprises two components as its contribution: (1) LSTD works in an on-policy...
In this paper, an improved message passing detection algorithm based on probability approximation is proposed for large-scale MIMO systems. Large-scale MIMO has been identified as a key technology for the upcoming fifth-generation (5G) wireless communication systems. The message passing detection (MPD) algorithm, which exploits channel-hardening theory, can achieve very good performance in large-scale MIMO...
The minimum mean square error (MMSE) algorithm can achieve near-optimal detection performance in massive MIMO systems. However, it involves a complicated matrix inversion. In this paper, a Chebyshev symmetrical successive over-relaxation iteration algorithm (CSSOR), based on the channel-hardening characteristic of massive MIMO systems, is proposed to avoid the matrix inversion, whose computational...
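For illustration, the matrix inversion referred to above arises because the MMSE estimate solves the linear system (HᴴH + σ²I)x = Hᴴy, whose system matrix is symmetric positive definite. A plain successive over-relaxation (SOR) sweep, sketched below, solves such a system without forming the inverse; this is a simplified stand-in for the paper's Chebyshev-accelerated symmetric SOR, and the function name and parameter values are illustrative assumptions.

```python
import numpy as np

def sor_solve(A, b, omega=1.2, iters=200):
    """Plain successive over-relaxation (SOR) for A x = b.

    A is assumed symmetric positive definite, as the MMSE filtering
    matrix H^H H + sigma^2 I is. This is a simplified illustration of
    inversion-free detection, not the paper's CSSOR algorithm.
    """
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            # Gauss-Seidel residual using the freshest values of x,
            # relaxed by the factor omega (0 < omega < 2 for SPD A).
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
    return x
```

Each sweep costs O(n²) operations, so a few sweeps are far cheaper than the O(n³) explicit inversion when the iteration converges quickly, which channel hardening helps ensure.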
The work considers the mathematical aspects of one of the most fundamental problems of data analysis: the search (selection), within a collection of objects, for a subset of similar ones. In particular, the problem arises in connection with data editing and cleaning (the removal of irrelevant, i.e., not similar, elements). We consider the model of this problem, i.e., the problem of searching for a subset of largest...
Metaheuristic methods of constructing algorithms, created in the 1990s and inspired by the no-free-lunch theorem of Wolpert and Macready to exploit specific properties of problems, do not meet the present expectations of practitioners. Artificial intelligence algorithms commonly used in recent years have also proved ineffective in solving a large group of extremely difficult instances...
Resampling is an essential step in particle filtering (PF) methods in order to avoid degeneracy. Systematic resampling is one of a number of commonly used resampling techniques, owing to desirable properties such as ease of implementation and low computational complexity. However, it has a tendency to resample very-low-weight particles, especially when a large number of resampled particles...
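The low complexity mentioned above comes from systematic resampling's construction: a single uniform draw followed by a comb of evenly spaced pointers swept over the cumulative weights. A generic textbook sketch (not code from the paper) fits in a few lines:

```python
import numpy as np

def systematic_resample(weights, rng=None):
    """Systematic resampling: one uniform draw, then n evenly spaced
    pointers over the cumulative weight function. Returns the indices
    of the selected particles; O(n) overall."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(weights)
    # One random offset shared by all n pointers.
    positions = (rng.random() + np.arange(n)) / n
    cumsum = np.cumsum(weights)
    cumsum[-1] = 1.0  # guard against floating-point round-off
    return np.searchsorted(cumsum, positions)
```

Because the pointers are deterministic given one draw, a particle with normalized weight w is selected either floor(n·w) or ceil(n·w) times, which is the low-variance property that makes the scheme popular.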
Adaptive Dynamic Programming (ADP) with a critic-actor architecture is a useful way to achieve online learning control. The previously developed Gaussian-Kernel Adaptive Dynamic Programming (GK-ADP) algorithm has a kind of two-phase iteration, which not only approximates the value function but also optimizes hyper-parameters simultaneously. However, just as with most iterative algorithms that are applied...
In this paper we propose a low-computational-complexity channel estimation algorithm for the downlink of massive MIMO systems. This algorithm employs the expectation propagation and expectation maximization techniques, assuming a sparsity promoting conditional Gaussian prior distribution on the channels. Its computational complexity scales only linearly with the number of transmit antennas, the number...
This paper discusses an application of randomized algorithms for matrix factorization to the classic Kalman filtering technique to estimate the state of a linear dynamical system. We consider the case when the state space is high dimensional leading to a high computational complexity in evaluating the state estimate and the estimation error covariance. We formalize two approaches based on the use...
In this paper, we propose two sparsity-aware algorithms, namely the Recursive Least-Squares for sparse systems (S-RLS) and the l0-norm Recursive Least-Squares (l0-RLS), in order to exploit the sparsity of an unknown system. The first algorithm applies a discard function to the weight vector to disregard the coefficients close to zero during the update process. The second algorithm employs the sparsity-promoting...
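The discard-function idea described above can be sketched as a standard RLS update followed by a hard threshold that zeroes near-zero coefficients. The hard-threshold rule, the function name, and the parameter values below are illustrative assumptions, not the paper's exact S-RLS formulation:

```python
import numpy as np

def s_rls(x, d, order=4, lam=0.98, eps=0.02):
    """Sketch of a sparsity-aware RLS in the spirit of S-RLS:
    a conventional RLS update, then a discard step that zeroes
    coefficients with magnitude below eps."""
    w = np.zeros(order)
    P = np.eye(order) * 100.0            # inverse-correlation estimate
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]         # regressor, most recent first
        k = P @ u / (lam + u @ P @ u)    # Kalman gain vector
        e = d[n] - w @ u                 # a priori error
        w = w + k * e                    # RLS coefficient update
        P = (P - np.outer(k, u @ P)) / lam
        w[np.abs(w) < eps] = 0.0         # discard function
    return w
```

For a truly sparse system the discard step pins the zero taps at exactly zero instead of letting them hover at small noisy values, which is the source of the improved steady-state behavior such algorithms target.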
The closest string problem is a core problem in computational biology with applications in other fields like coding theory. Many algorithms exist to solve this problem, but due to its inherent high computational complexity (typically NP-hard), it can only be solved efficiently by restricting the search space to a specific range of parameters. Often, the run-time of these algorithms is exponential...
Given a graph G = (V, E) with non-negative edge lengths and a subset R ⊂ V, a Steiner tree for R in G is an acyclic connected subgraph of G interconnecting all vertices in R, and a terminal Steiner tree is defined to be a Steiner tree in G with all the vertices of R as its leaves. A bottleneck edge of a Steiner tree is an edge with the largest length in the tree. The bottleneck Steiner tree problem (BSTP)...
Kernel independent component analysis (KICA) detects primary independent components of data by minimizing kernelized canonical correlation of random variables in a reproducing kernel Hilbert space. KICA has been widely used in many practical tasks, e.g., blind source separation and speech recognition. However, the dense kernel matrix in traditional KICA causes high computational complexity which prohibits...
For very large datasets, random projections (RP) have become the tool of choice for dimensionality reduction. This is due to the computational complexity of principal component analysis. However, the recent development of randomized principal component analysis (RPCA) has opened up the possibility of obtaining approximate principal components on very large datasets. In this paper, we compare the performance...
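The two approaches compared above can each be sketched in a few lines of NumPy. The randomized PCA below follows the standard Halko-style sketch-then-SVD recipe; the function names, the oversampling parameter, and the test data are illustrative assumptions rather than the paper's exact setup:

```python
import numpy as np

def random_projection(X, k, rng):
    """Gaussian random projection of the rows of X to k dimensions."""
    R = rng.standard_normal((X.shape[1], k)) / np.sqrt(k)
    return X @ R

def randomized_pca(X, k, rng, oversample=10):
    """Randomized PCA (Halko-style): sketch the column space with a
    random test matrix, orthonormalize, then take an exact SVD in the
    small subspace. Returns scores on the top-k components."""
    Xc = X - X.mean(axis=0)
    G = rng.standard_normal((X.shape[1], k + oversample))
    Q, _ = np.linalg.qr(Xc @ G)               # orthonormal range basis
    _, _, Vt = np.linalg.svd(Q.T @ Xc, full_matrices=False)
    return Xc @ Vt[:k].T                      # top-k component scores
```

The contrast is that random projection is data-oblivious (one matrix multiply, no adaptation to X), while randomized PCA spends one extra pass to adapt the subspace to the data, recovering near-exact principal components when the spectrum decays.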
High reliability is required in networks, and it is important to build robust networks that are tolerant to network failures. In content delivery services in particular, service interruptions due to disconnection of communication paths between the server and nodes that receive the service must be avoided. Content delivery services use a master server that contains the original content and multiple...
The computational complexity of kernel methods grows at least quadratically with respect to the training size and hence low rank kernel approximation techniques are commonly used. One of the most popular approximations is constructed by sub-sampling the training data. In this paper, we present a sampling algorithm called Enhanced Distance Subset Approximation (EDSA) based on a novel kernel function...
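The sub-sampling approximation mentioned above is usually formalized as the Nyström method: pick m landmark points and approximate the full kernel matrix as K ≈ C W⁺ Cᵀ. The sketch below is that generic baseline with uniform sampling, not EDSA itself, whose novelty lies in a different, kernel-specific sampling rule; the RBF kernel and parameter values are illustrative assumptions.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    """RBF kernel matrix between the rows of X and the rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom(X, m, rng, gamma=0.5):
    """Classical Nystrom approximation with m uniformly sub-sampled
    landmarks: K ~= C W^+ C^T, costing O(n m^2) instead of O(n^2)."""
    idx = rng.choice(len(X), size=m, replace=False)
    C = rbf(X, X[idx], gamma)        # n x m cross-kernel block
    W = C[idx]                       # m x m landmark kernel block
    return C @ np.linalg.pinv(W) @ C.T
```

The quality of the approximation hinges entirely on which m points are chosen, which is why sampling strategies such as the one the abstract proposes are an active topic.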
A great deal of research has been done on efficient implementation of RS code encoders and decoders as VLSI chips. However, none of it seeks a unified approach to obtaining VLSI hardware for RS encoders and decoders that can be easily configured for use across a large set of application areas with varying specifications. In this paper, a novel parallel RS decoding algorithm suitable...
In this paper, we derive two algorithms, namely the Simple Set-Membership Affine Projection (S-SM-AP) and the improved S-SM-AP (IS-SM-AP), in order to exploit the sparsity of an unknown system while focusing on having low computational complexity. To achieve this goal, the proposed algorithms apply a discard function on the weight vector to disregard the coefficients close to zero during the update...
Modularity is widely used in community detection in networks. Despite its great success, one drawback is that its optimization is an NP-complete problem. To this end, this paper designs a new optimization method termed Global-Local-Search (Glo-Loc-Search for short). The basic idea rests on two important new properties we have discovered, which have seldom been noticed in previous studies. The...
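For context, the quantity being optimized above is Newman's modularity, Q = (1/2m) Σᵢⱼ (Aᵢⱼ − kᵢkⱼ/2m) δ(cᵢ, cⱼ): the fraction of edges inside communities minus the fraction expected under a random degree-preserving rewiring. A minimal evaluation of Q for a given partition (not the paper's Glo-Loc-Search optimizer) looks like this:

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity Q for an undirected adjacency matrix A and a
    community assignment `labels` (one label per node)."""
    k = A.sum(axis=1)                              # node degrees
    two_m = k.sum()                                # 2 * number of edges
    same = labels[:, None] == labels[None, :]      # delta(c_i, c_j)
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m
```

Evaluating Q for one partition is cheap; the NP-completeness mentioned above comes from searching over the exponentially many possible partitions for the one maximizing Q.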
In this paper, the belief propagation (BP) based approximation methods introduced in the literature for low-density parity-check (LDPC) codes are adapted to the Raptor decoder structure in order to reduce its computational complexity. The bit error rate (BER) performances of the algorithms over the additive white Gaussian noise (AWGN) channel are obtained through both theoretical analysis and simulations...