Deep convolutional neural networks (CNNs) have proven highly effective for visual recognition, where learning a universal representation from the activations of a convolutional layer is a fundamental problem. In this paper, we present Fisher Vector encoding with Variational Auto-Encoder (FV-VAE), a novel deep architecture that quantizes the local activations of a convolutional layer in a deep generative...
Modeling of high-order interactional context, e.g., group interaction, lies at the center of collective/group activity recognition. However, most previous activity recognition methods do not offer a flexible and scalable scheme for handling the high-order context modeling problem. To explicitly address this fundamental bottleneck, we propose a recurrent interactional context modeling scheme based...
A developer of mobile or desktop applications is responsible for implementing the network logic of their software. Nonetheless: i) developers are not network specialists, while pressure to emphasize the visible parts of an application places the network logic outside the coding focus. Moreover, computer networks evolve at a pace that developers may not follow. ii) From the network resource provider...
Sparse coding (SC) is an unsupervised learning scheme that has received an increasing amount of interest in recent years. However, conventional SC vectorizes the input images, which destroys their intrinsic spatial structure. In this paper, we propose a novel graph-regularized tensor sparse coding (GTSC) for image representation. GTSC preserves the local proximity of elementary structures...
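The conventional SC baseline that this abstract contrasts with can be illustrated by a minimal sketch: inferring a sparse code for one vectorized image via iterative soft-thresholding (ISTA). The dictionary `D`, penalty `lam`, and solver choice below are illustrative assumptions, not the paper's GTSC method:

```python
import numpy as np

def ista(D, x, lam=0.01, iters=300):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the quadratic term
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ a - x)           # gradient of the smooth part
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a
```

Note that `x` is a flattened image here, which is exactly the vectorization step GTSC avoids by working on tensors directly.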
The ability to proactively monitor business processes is one of the main differentiators that keep firms competitive. Process execution logs generated by Process Aware Information Systems (PAIS) help make various business-process-specific predictions. This enables proactive situational awareness related to the execution of business processes. The goal of the approach proposed in the current...
DNA Computing, since its inception in 1994, has caught the eye of researchers due to its massive parallelism and extremely high data density. These powers have given DNA computers the ability to solve computationally "hard" problems by searching over large search spaces, as well as to serve as a powerful data-storage technique. This would be much more powerful and general-purpose when its ability is...
In the classical approaches to model checking (explicit and symbolic), the tools encode the state space as well as the transition relation between states. This way, when the predecessor of a state is needed, as in the evaluation of CTL formulas, it can be extracted from the state space in a straightforward manner. Practical experience indicates that forgoing the encoding of the transition...
We extend Kawamura and Cook's framework for computational complexity for operators in analysis. This model is based on second-order complexity theory for functionals on the Baire space, which is lifted to metric spaces via representations. Time is measured in the length of the input encodings and the output precision.
In large-scale distributed computing clusters, such as Amazon EC2, there are several types of “system noise” that can result in major degradation of performance: system failures, bottlenecks due to limited communication bandwidth, latency due to straggler nodes, etc. On the other hand, these systems enjoy an abundance of redundancy — a vast number of computing nodes and large storage capacity. There...
We consider the problem of computing the convolution of two long vectors using parallel processors in the presence of “stragglers”. Stragglers are the small fraction of faulty or slow processors that can delay the entire computation in time-critical distributed systems. We first show that splitting the vectors into smaller pieces and using a linear code to encode these pieces provides improved resilience...
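The split-and-encode idea can be sketched in its simplest instance (an illustration of the general principle, not necessarily the authors' exact construction): split the vector into two pieces, give a third worker the parity piece, and recover the full convolution from any two of the three workers, since convolution is linear in its first argument:

```python
import numpy as np

def coded_convolution(a, b, straggler):
    """Convolve a (even length) with b, tolerating one straggling worker."""
    m = len(a) // 2
    a1, a2 = a[:m], a[m:2 * m]
    coded = [a1, a2, a1 + a2]              # (3, 2) parity code over the pieces
    # Each worker convolves its coded piece with b; one never returns.
    results = [None if i == straggler else np.convolve(c, b)
               for i, c in enumerate(coded)]
    # Decode: conv(a1+a2, b) = conv(a1, b) + conv(a2, b) by linearity.
    if straggler == 0:
        r1, r2 = results[2] - results[1], results[1]
    elif straggler == 1:
        r1, r2 = results[0], results[2] - results[0]
    else:
        r1, r2 = results[0], results[1]
    # Overlap-add: conv(a, b)[t] = conv(a1, b)[t] + conv(a2, b)[t - m].
    out = np.zeros(len(a) + len(b) - 1)
    out[:len(r1)] += r1
    out[m:m + len(r2)] += r2
    return out
```

Here the master waits for the fastest two workers instead of a fixed pair, which is the resilience gain the abstract refers to.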
Compression via substring enumeration (CSE) is a well-known lossless compression algorithm for one-dimensional (1D) sources. CSE encodes the source using a probabilistic model built from the circular string of the input. CSE is applicable to two-dimensional (2D) sources such as images by treating a line of pixels of the 2D source as a symbol of an extended...
Motivated by the question of whether the recently introduced Reduced Cutset Coding (RCC) [1], [2] offers rate-complexity performance benefits over conventional context-based conditional lossless coding for sources with two-dimensional Markov structure, this paper compares several row-centric coding strategies that vary in the amount of conditioning as well as whether a model or an empirical table...
It was first observed by John Bell that quantum theory predicts correlations between measurement outcomes that lie beyond the explanatory power of local hidden variable theories. These correlations have traditionally been studied extensively in the probabilistic framework. A drawback of this perspective is that one is then forced to use in a single argument the outcomes of mutually-exclusive measurements...
Consider a distributed computing setup consisting of a master node and n worker nodes, each equipped with p cores, and a function f(x) = g(f1(x), f2(x), …, fk(x)), where each fi can be computed independently of the rest. Assuming that the worker computational times have exponential tails, what is the minimum possible time for computing f? Can we use coding theory principles to speed up this distributed...
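One way coding can help here, sketched under the simplifying assumption of i.i.d. unit-rate exponential worker delays (an illustrative model, not the paper's exact setting): with an (n, k)-MDS code the master needs only the fastest k of n workers, so the completion time drops from the maximum of k delays to the k-th order statistic of n delays:

```python
import numpy as np

rng = np.random.default_rng(42)

def uncoded_time(k, trials=5000):
    # Uncoded: the master must wait for all k workers (max of k exponentials).
    return rng.exponential(1.0, (trials, k)).max(axis=1).mean()

def mds_coded_time(n, k, trials=5000):
    # (n, k)-MDS coded: any k of the n workers suffice, so completion time
    # is the k-th order statistic of n exponential delays.
    delays = np.sort(rng.exponential(1.0, (trials, n)), axis=1)
    return delays[:, k - 1].mean()
```

In this model the uncoded mean for k = 10 is H_10 ≈ 2.93, while the (20, 10)-coded mean is H_20 − H_10 ≈ 0.67, a roughly 4× speedup at the cost of 2× redundant computation.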
Typical neuroimaging studies analyze associations between physiological or behavioral traits and brain structure or function. Some rely on predicting such traits from neuroimaging data. To explain associations between brain features and multiple traits, reduced-rank regression (RRR) models are often used, such as canonical correlation analysis (CCA) and partial least squares (PLS). These methods estimate...
We present efficient coding schemes and distributed implementations of erasure coded linear system solvers. Erasure coded computations belong to the class of algorithmic fault tolerance schemes. They are based on augmenting an input dataset, executing the algorithm on the augmented dataset, and in the event of a fault, recovering the solution from the corresponding augmented solution. This process...
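The augment-execute-recover loop described above can be sketched in its simplest instance: a single parity block protecting a matrix-vector product, the kernel of an iterative linear solver. The function name and one-parity scheme below are illustrative assumptions; the paper's actual erasure codes are more general:

```python
import numpy as np

def coded_matvec(A, x, failed=None):
    """Compute A @ x with three workers, tolerating one fault via a parity block."""
    m = A.shape[0] // 2
    A1, A2 = A[:m], A[m:2 * m]
    blocks = [A1, A2, A1 + A2]             # augmented (coded) dataset
    # Each worker multiplies its block by x; the failed one returns nothing.
    partial = [None if i == failed else B @ x for i, B in enumerate(blocks)]
    # Recover a lost block from the parity relation (A1 + A2) x = A1 x + A2 x.
    if failed == 0:
        partial[0] = partial[2] - partial[1]
    elif failed == 1:
        partial[1] = partial[2] - partial[0]
    return np.concatenate([partial[0], partial[1]])
```

Embedding this recovery inside each solver iteration is what lets the overall computation continue through a fault without restarting.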
The challenges met during software projects fall into many categories. Development and technical solutions bring about technical challenges, but the situations one is confronted with may also be sociological, psychological, or managerial in nature. Without any knowledge of the social sciences, programmers, testers, and managers might interpret the social aspects of...
In the field of Evolutionary Computation, it is common to assume that the computational load of fitness evaluation is roughly the same for all possible cases. The main objective of this paper is to show that this assumption is frequently false. Therefore, examples of evolutionary methods that use a problem encoding which allows for significant optimization of the fitness computation...
This paper offers vector-branching diagrams for modeling TIF processes. The properties and disadvantages of TIF methods that make certain methods impossible to use are analyzed. To solve these problems, the use of a subtractive-additive TIF method is proposed. The effectiveness of its application is demonstrated by comparing the TIF methods and the efficiency coefficient for different bit capacities.
Many companies and institutions, in their attempts to construct decision-making systems, face a bottleneck in the performance of those systems. Training neural networks can take from several days to several weeks. The traditional approach suggests modifying modern systems and microcircuits until their performance reaches a permissible limit. A different, unconventional approach looks for opportunities...