In many robotics tasks involving impacts (e.g. grasping, hitting, kicking), the complex interactions between the physics of the objects and of the robot make it hard to create an analytical model of the interactions that can be used for prediction and planning. Exploration learning can enable a robot to autonomously learn such tasks and models simultaneously by trial and error. The Cost-Regularized...
Integrated CPU-GPU architectures provide excellent acceleration capabilities for data-parallel applications on embedded platforms while meeting size, weight and power (SWaP) requirements. However, sharing of main memory between CPU applications and GPU kernels can severely affect the execution of GPU kernels and diminish the performance gain provided by the GPU. In the NVIDIA Tegra TK1 platform which...
Objectives: The electroencephalogram (EEG) plays an important role in recording the activity of the human brain, and epileptic seizures can be identified from EEG signals. Methods/Statistical Analysis: In this work, a method known as empirical mode decomposition (EMD) is used for the classification of EEG signals and compared with an empirical wavelet transform (EWT) based method. Findings: In this paper...
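As a hedged illustration of what an EMD front end for EEG classification might look like, the sketch below decomposes one channel into intrinsic mode functions and extracts simple per-IMF statistics. It assumes the third-party PyEMD package (PyPI name EMD-signal), and the feature choices are illustrative, not the paper's.

```python
# Sketch of EMD-based feature extraction for one EEG channel.
# Assumes the PyEMD package (pip install EMD-signal); the feature set
# below is illustrative, not necessarily what the paper uses.
import numpy as np
from PyEMD import EMD

def emd_features(signal, max_imf=5):
    """Decompose a channel into IMFs and summarize each IMF."""
    imfs = EMD().emd(signal, max_imf=max_imf)           # (n_imfs, n_samples)
    feats = []
    for imf in imfs:
        feats.extend([
            np.mean(imf ** 2),                           # energy
            np.std(imf),                                 # spread
            np.mean(np.abs(np.diff(np.sign(imf)))) / 2,  # zero-crossing rate
        ])
    return np.asarray(feats)

rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * np.linspace(0, 1, 256)) + 0.3 * rng.standard_normal(256)
print(emd_features(eeg).shape)
```

The resulting feature vector would then feed any off-the-shelf classifier; swapping EMD for EWT at the decomposition step gives the comparison the abstract describes.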
Information-theoretic measures (e.g. the Kullback-Leibler divergence and Shannon mutual information) have been used for exploring possibly nonlinear multivariate dependencies in high dimensions. If these dependencies are assumed to follow a Markov factor graph model, this exploration process is called structure discovery. For discrete-valued samples, estimates of the information divergence over the...
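For discrete-valued samples, the simplest baseline is the plug-in (empirical-histogram) estimator of these quantities; the numpy-only sketch below shows that baseline and is not the estimator this paper develops.

```python
# Plug-in estimates of KL divergence and mutual information for
# discrete-valued samples, via empirical histograms (numpy only).
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D(p || q) for two discrete distributions over the same alphabet."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def mutual_information(x, y):
    """I(X; Y) from paired discrete samples, in nats."""
    joint = np.histogram2d(x, y, bins=(np.ptp(x) + 1, np.ptp(y) + 1))[0]
    pxy = joint / joint.sum()
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
x = rng.integers(0, 4, 10_000)
y = (x + rng.integers(0, 2, 10_000)) % 4    # y depends on x
print(kl_divergence([0.5, 0.5], [0.9, 0.1]))
print(mutual_information(x, y))             # clearly > 0
```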
We propose an eigenvalue shrinkage method with a modified Chebyshev polynomial approximation (CPA). Eigenvalue shrinkage has been used in many fields of signal and image processing. However, the shrinkage takes enormous computation time, especially when a matrix constructed from a signal or image becomes so large that eigendecomposition can hardly be performed. The CPA is an approximation...
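The appeal of a CPA here is that a polynomial in the matrix needs only matrix-vector products, never an eigendecomposition. The sketch below shows the generic Chebyshev recurrence for approximating f(A)v on a symmetric matrix; the shrinkage function, eigenvalue bounds, and polynomial degree are illustrative assumptions, not the paper's modified scheme.

```python
# Sketch: apply an eigenvalue shrinkage function f to a symmetric matrix A
# via Chebyshev polynomial approximation, so f(A) @ v needs only
# matrix-vector products. All parameters below are illustrative.
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_apply(A, v, f, lo, hi, degree=50):
    """Approximate f(A) @ v, assuming all eigenvalues of A lie in [lo, hi]."""
    # Fit Chebyshev coefficients of f after mapping [lo, hi] -> [-1, 1].
    xs = np.cos(np.pi * (np.arange(degree + 1) + 0.5) / (degree + 1))
    coeffs = C.chebfit(xs, f(0.5 * (hi - lo) * xs + 0.5 * (hi + lo)), degree)
    # Three-term recurrence T_{k+1}(B)v = 2 B T_k(B)v - T_{k-1}(B)v,
    # with B the affinely mapped matrix.
    Bv = lambda w: (2.0 * (A @ w) - (hi + lo) * w) / (hi - lo)
    t_prev, t_cur = v, Bv(v)
    out = coeffs[0] * t_prev + coeffs[1] * t_cur
    for c in coeffs[2:]:
        t_prev, t_cur = t_cur, 2.0 * Bv(t_cur) - t_prev
        out += c * t_cur
    return out

rng = np.random.default_rng(2)
M = rng.standard_normal((200, 200))
A = (M + M.T) / 2
lo, hi = -30.0, 30.0                       # crude bounds, e.g. from Gershgorin
shrink = lambda lam: np.maximum(np.abs(lam) - 5.0, 0.0) * np.sign(lam)
v = rng.standard_normal(200)
approx = cheb_apply(A, v, shrink, lo, hi)
# Check against the exact (eigendecomposition) result on this small example.
w, U = np.linalg.eigh(A)
exact = U @ (shrink(w) * (U.T @ v))
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```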
Deep convolutional neural networks (CNNs) have shown strong performance in many computer vision tasks. However, the high computational complexity of CNNs involves a huge amount of data movement between the processor core and the memory hierarchy, which accounts for the majority of the power consumption. This paper presents Chain-NN, a novel energy-efficient 1D chain architecture for accelerating...
Estimating the bias field together with the tissue class of a noisy magnetic resonance image has been a challenging task because of the nonlinear nature of the bias field. To address this issue, we have proposed two new schemes. The first is a recursive framework in which class labels and bias fields are estimated simultaneously. In one part of the recursion, a variable variance Adaptive...
Convolutional neural networks (CNNs) find use in a variety of computer vision tasks ranging from object recognition and detection to scene understanding, owing to their exceptional accuracy. Different algorithms exist for computing CNNs. In this paper, we compare the conventional convolution algorithm with a faster algorithm based on Winograd's minimal filtering theory for efficient FPGA...
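For context, Winograd's minimal filtering computes F(2, 3), two outputs of a 3-tap filter, with 4 multiplications instead of 6. The worked numpy example below uses the standard transform matrices (as in Lavin and Gray's formulation) and only illustrates the arithmetic, not the paper's FPGA mapping.

```python
# Worked 1D example of Winograd's minimal filtering F(2, 3): two outputs
# of a 3-tap filter from a 4-sample tile using 4 multiplications.
import numpy as np

AT = np.array([[1, 1, 1, 0],
               [0, 1, -1, -1]], float)
G = np.array([[1, 0, 0],
              [0.5, 0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0, 0, 1]], float)
BT = np.array([[1, 0, -1, 0],
               [0, 1, 1, 0],
               [0, -1, 1, 0],
               [0, 1, 0, -1]], float)

d = np.array([1.0, 2.0, 3.0, 4.0])   # input tile
g = np.array([0.5, -1.0, 2.0])       # filter taps

y = AT @ ((G @ g) * (BT @ d))        # 4 elementwise multiplies
# Reference: direct correlation (the "convolution" used in CNNs).
ref = np.array([d[0:3] @ g, d[1:4] @ g])
print(y, ref)                         # identical up to rounding
```

The filter transform G @ g can be precomputed once per filter, so in steady state only the data transform, the elementwise product, and the output transform remain per tile.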
Convolution is a fundamental operation in many domains, such as computer vision, natural language processing, and image processing. Recent successes of convolutional neural networks in various deep learning applications put even higher demand on fast convolution. The high computation throughput and memory bandwidth of graphics processing units (GPUs) make GPUs a natural choice for accelerating...
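One common way convolution is made GPU-friendly is lowering it to a dense matrix multiply (im2col + GEMM), the shape GPU GEMM libraries execute at high throughput. The numpy sketch below illustrates only the lowering, not device code, and is not necessarily the formulation this particular paper pursues.

```python
# Sketch of the im2col lowering that maps 2D convolution onto a dense
# matrix multiply; numpy stands in for the GPU, since the mapping,
# not the device, is the point.
import numpy as np

def conv2d_im2col(x, w):
    """Valid-mode 2D correlation of x (H, W) with w (kh, kw) via GEMM."""
    H, W = x.shape
    kh, kw = w.shape
    oh, ow = H - kh + 1, W - kw + 1
    # Each output pixel becomes one column of patch values.
    cols = np.empty((kh * kw, oh * ow))
    for i in range(oh):
        for j in range(ow):
            cols[:, i * ow + j] = x[i:i + kh, j:j + kw].ravel()
    return (w.ravel() @ cols).reshape(oh, ow)   # one matrix product

rng = np.random.default_rng(3)
x = rng.standard_normal((8, 8))
w = rng.standard_normal((3, 3))
ref = np.array([[np.sum(x[i:i + 3, j:j + 3] * w)
                 for j in range(6)] for i in range(6)])
print(np.allclose(conv2d_im2col(x, w), ref))
```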
Approximate computing aims to expose and exploit quality vs. efficiency tradeoffs to enable ever-more-demanding applications on energy-constrained devices such as smartphones or IoT devices. This paper makes the case for arbitrary quantization as a compelling approximation technique that exposes quality vs. energy tradeoffs and provides practical error guarantees. We present QAPPA (Quality Autotuner...
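To make the quality knob concrete, the sketch below uniformly quantizes values to an arbitrary bit width and reports the resulting error; it illustrates only the tradeoff an autotuner like QAPPA would search over, not QAPPA's tuning loop itself.

```python
# Sketch of the quality-vs-precision knob behind arbitrary quantization:
# round values to b bits and measure the resulting error.
import numpy as np

def quantize(x, bits, lo=-1.0, hi=1.0):
    """Uniformly quantize x in [lo, hi] to 2**bits levels."""
    levels = 2 ** bits - 1
    q = np.round((np.clip(x, lo, hi) - lo) / (hi - lo) * levels)
    return q / levels * (hi - lo) + lo

rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, 100_000)
for bits in (2, 4, 8, 12):
    err = np.sqrt(np.mean((quantize(x, bits) - x) ** 2))
    print(f"{bits:2d} bits -> RMS error {err:.5f}")
```

Each bit roughly halves the RMS error, which is the kind of monotone quality-vs-cost curve that makes autotuning with error guarantees feasible.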
Multiple sclerosis (MS) is a neurological disorder which interrupts the communication between the brain and other parts of the body, resulting in neurological, physical, and functional limitations. Gait deterioration is one of the most common problems, and hence assessment of walking quality is a crucial part of MS diagnosis. In-clinic evaluations use physical examinations and an expanded disability...
For many computation-intensive tasks, simultaneous data access into multi-dimensional data arrays is highly restricted by the data mapping strategy and memory port constraints. To increase memory access bandwidth, innovative memory partitioning and mapping algorithms have been proposed that enable simultaneous access to multiple memory blocks by physically distributing data elements in the same...
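A textbook instance of such partitioning is cyclic interleaving, sketched below: placing element i in bank i mod N makes any N consecutive accesses fall in N distinct banks and thus conflict-free. This is an illustrative scheme, not necessarily the algorithm proposed here.

```python
# Illustrative cyclic partitioning of a 1D array across N memory banks:
# element i lives in bank i % N at offset i // N, so any N consecutive
# elements hit N distinct banks and can be fetched in one cycle.
N = 4
data = list(range(16))
banks = [[x for x in data if x % N == b] for b in range(N)]

def fetch(i):
    return banks[i % N][i // N]

# Four consecutive accesses hit four different banks (conflict-free).
print([(i % N, fetch(i)) for i in range(5, 9)])
```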
Does a hearing-impaired individual's speech reflect their hearing loss, and if it does, can the nature of the hearing loss be inferred from their speech? To investigate these questions, at least four hours of speech data were recorded from each of 37 adult individuals, both male and female, belonging to four classes: 7 normal, and 30 severely-to-profoundly hearing-impaired with high, medium or low speech...
This paper discusses a Correntropy Induced Metric (CIM) based Growing Neural Gas (GNG) architecture. CIM is a kernel-based similarity measure from the information-theoretic learning perspective, which quantifies the similarity between the probability distributions of input and reference vectors. We apply CIM to finding the maximum error region and to the node insertion criterion, instead of the Euclidean distance...
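Under the standard definition, CIM between two vectors with a Gaussian kernel is sqrt(kappa(0) - mean_i kappa(x_i - y_i)). The numpy sketch below uses the common unnormalized form (kappa(0) = 1) with an assumed kernel width, and also shows why CIM is far less sensitive to a single large deviation than the Euclidean distance, which motivates its use in GNG error tracking.

```python
# Sketch of the Correntropy Induced Metric (CIM) with a Gaussian kernel,
# in its common unnormalized form (kappa(0) = 1). The kernel width sigma
# is an assumed parameter; the GNG node-insertion logic is not shown.
import numpy as np

def cim(x, y, sigma=1.0):
    """CIM(x, y) = sqrt(1 - mean_i exp(-(x_i - y_i)^2 / (2 sigma^2)))."""
    k = np.exp(-((np.asarray(x) - np.asarray(y)) ** 2) / (2 * sigma ** 2))
    return float(np.sqrt(1.0 - k.mean()))

a = np.zeros(10)
b = np.zeros(10); b[0] = 100.0            # one large outlier
print(cim(a, b), np.linalg.norm(a - b))   # CIM saturates; L2 explodes
```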
Kernel density estimation is a popular method for identifying crime hotspots for the purpose of data-driven policing. However, computing a kernel density estimate is computationally intensive for large crime datasets, and the quality of the resulting estimate depends heavily on parameters that are difficult to set manually. Inspired by methods from image processing, we propose a novel way of performing...
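As a baseline for what is being sped up, the sketch below computes a conventional 2D kernel density estimate over synthetic incident coordinates with scipy.stats.gaussian_kde, whose bandwidth defaults to Scott's rule; the paper's image-processing-inspired method and its parameter selection are not shown.

```python
# Baseline 2D kernel density estimate over incident coordinates using
# scipy.stats.gaussian_kde (Scott's-rule bandwidth). Assumes SciPy;
# the data here are synthetic, with two hotspots of different size.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
pts = np.vstack([rng.normal([2, 2], 0.3, (300, 2)),
                 rng.normal([6, 5], 0.8, (700, 2))]).T    # shape (2, n)

kde = gaussian_kde(pts)                    # bandwidth set by Scott's rule
gx, gy = np.mgrid[0:8:80j, 0:8:80j]
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(80, 80)
peak = np.unravel_index(density.argmax(), density.shape)
print("hottest cell:", gx[peak], gy[peak])
```

Evaluating the kernel sum at every grid cell is the O(n * cells) cost the abstract calls computationally intensive, and the bandwidth is the hard-to-set parameter.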
Seaports play a vital role in the global economy, as they serve as connection corridors to all other modes of transport and as engines of growth for the wider region. But ports today face numerous unique challenges, and significant investments are required for them to remain competitive. In support of greater transparency in policy making, decisions regarding investment need to be...
Recently, researchers have discovered that GPUs have advantages for non-graphics computing. CPU-GPU heterogeneous architectures combine a CPU and a GPU on a single chip, making it easier for the GPU to run non-graphics programs. Researchers have also proposed using the LLC (last-level cache) to store and exchange data between the CPU and GPU. We find that the LLC hit rate has a great influence on memory access performance and overall system performance...
One of the central problems in machine learning and pattern recognition is how to deal with high-dimensional data, whether for visualization or for classification and clustering. Most dimensionality reduction techniques, designed to cope with the curse of dimensionality, are based on the Euclidean distance metric. In this work, we propose an unsupervised nonlinear dimensionality reduction method which...
Sufficient dimension reduction (SDR) is a popular framework for supervised dimension reduction, aiming to reduce the dimensionality of input data while maximally maintaining information about the output data. On the other hand, in many recent supervised classification tasks, it is conceivable that the balance of samples in each class varies between the training and testing phases. Such a phenomenon,...
We present our experiences using cloud computing to support data-intensive analytics on satellite imagery for commercial applications. Drawing on our background in high-performance computing, we note parallels between the early days of clustered computing systems and the current state of cloud computing and its potential to disrupt the HPC market. Using our own virtual file system layer on top of...