Worst-case execution time (WCET) analyses for single tasks are well established, and their results ultimately serve to provide execution-time parameters for schedulability analyses. Besides WCET analysis, an important problem is maximum blocking time (MBT) analysis, which is essential in deferred-preemption schedules for the selection of preemption points. Among the most pressing problems in this...
The density of stochastic simulation output provides more information on system performance than the mean alone. However, density estimation methods may require large sample sizes to achieve a given accuracy or desired structural properties. A nonparametric estimation method based on exponential epi-splines has shown promise in overcoming this difficulty by incorporating qualitative and quantitative...
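The exponential epi-spline estimator itself is not reproduced here; as a point of comparison, the following is a minimal sketch of the classical nonparametric baseline, a Gaussian kernel density estimate over simulation output. The sample values and bandwidth are hypothetical.

```python
import math

def gaussian_kde(sample, bandwidth):
    """Return a baseline Gaussian kernel density estimator for `sample`.

    This is the classical nonparametric baseline, not the exponential
    epi-spline estimator discussed in the abstract.
    """
    n = len(sample)

    def density(x):
        # Average of Gaussian kernels centred on each observation.
        return sum(
            math.exp(-0.5 * ((x - xi) / bandwidth) ** 2)
            for xi in sample
        ) / (n * bandwidth * math.sqrt(2 * math.pi))

    return density

# Hypothetical simulation output (e.g. waiting times) and a rough bandwidth.
sample = [1.2, 0.9, 1.5, 1.1, 2.0, 0.8, 1.3]
f = gaussian_kde(sample, bandwidth=0.3)
```

The epi-spline approach additionally constrains the estimate with qualitative information (e.g. unimodality or support bounds), which a plain kernel estimate cannot enforce.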
In this paper, we discuss user-space parameters and performance modeling of 3-D stencil computing on a stream-based FPGA accelerator. We use a heat conduction simulation as a benchmark and evaluate the performance of an implementation developed with MaxCompiler, a high-level synthesis tool for FPGAs, and MaxGenFD, a domain-specific framework on top of MaxCompiler for finite-difference equations. Performance...
A new simulation scheme for the Digital Spiking Silicon Neuron (DSSN) model is proposed. This scheme is based on the reconfigurable dataflow computing paradigm and targets the Maxeler MaxWorkstation. Compared to the previous implementation of the DSSN network, the new scheme offers better flexibility and programmability. More importantly, computing with dataflow cores takes good...
Finite-element time-domain simulations of nonlinear eddy current problems require many solutions of large, sparse systems of equations. Model-order reduction in connection with a suitable approximation to the nonlinearity is a powerful strategy for reducing the computational effort for such systems. This paper shows that the support-vector regression algorithm is a promising choice for the approximation.
To enhance the understanding of human perception and reproduce it in an artificial system, several types of graphical models have been proposed that emulate the functionality of neurons in biological neural networks. In this work, we investigate the discriminatory power of two such probabilistic models of vision: a multivariate Gaussian model [1] and a restricted Boltzmann machine [2], both widely used...
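For the first of the two models, discriminatory power can be illustrated with a minimal sketch: fit a class-conditional multivariate Gaussian (diagonal covariance here, for brevity) to each class and classify by comparing log-likelihoods. The toy data and function names are assumptions for illustration only; the paper's RBM comparison is not shown.

```python
import math

def fit_diag_gaussian(data):
    """Fit a diagonal-covariance Gaussian: per-dimension mean and variance."""
    n, d = len(data), len(data[0])
    mean = [sum(row[j] for row in data) / n for j in range(d)]
    var = [
        sum((row[j] - mean[j]) ** 2 for row in data) / n + 1e-6  # variance floor
        for j in range(d)
    ]
    return mean, var

def log_likelihood(x, mean, var):
    """Log-density of x under the diagonal Gaussian (mean, var)."""
    return sum(
        -0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
        for xi, m, v in zip(x, mean, var)
    )

# Two toy feature classes (hypothetical 2-D descriptors).
class_a = [[0.1, 0.2], [0.0, 0.3], [0.2, 0.1]]
class_b = [[0.9, 0.8], [1.0, 0.7], [0.8, 0.9]]
model_a = fit_diag_gaussian(class_a)
model_b = fit_diag_gaussian(class_b)

def discriminate(x):
    """Return 'a' or 'b' by comparing class-conditional log-likelihoods."""
    return 'a' if log_likelihood(x, *model_a) >= log_likelihood(x, *model_b) else 'b'
```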
The paper deals with aspects of mathematical modelling that are necessary for the successful development of modelling and simulation technology. Several ongoing research efforts aim at improving the automation of parallelisation for performance optimization of Compute Unified Device Architecture (CUDA) software. An Application Programming Interface (API) allows the user to send commands to a graphics processing...
Nowadays, computer applications are becoming heavier while also requiring real-time results. Heterogeneous clusters, with their computing power, are a good answer to this demand. However, during execution a computing element of the cluster may fail, require maintenance, or the load may need to be rebalanced. In this paper, we propose a migration...
Based on L-2 Support Vector Machines (SVMs), Vapnik and Vashist introduced the concept of Learning Using Privileged Information (LUPI). This new paradigm takes into account the elements of human teaching during the process of machine learning. However, with the utilization of privileged information, the extended L-2 SVM model given by Vapnik and Vashist doubles the number of parameters used in the standard...
We present a query processing framework for the efficient evaluation of spatial filters on large numerical simulation datasets stored in a data-intensive cluster. Previously, filtering of large numerical simulations stored in scientific databases has been impractical owing to the immense data requirements. Rather, filtering is done during simulation or by loading snapshots into the aggregate memory...
We present a new approach to analytical performance modeling using Aspen, a domain-specific language. Aspen (Abstract Scalable Performance Engineering Notation) fills an important gap in existing performance modeling techniques and is designed to enable rapid exploration of new algorithms and architectures. It includes a formal specification of an application's performance behavior and an abstract...
Gas distribution mapping is a crucial task in emission monitoring and search-and-rescue applications. A common assumption made by state-of-the-art mapping algorithms is that only one type of gaseous substance is present in the environment. For real-world applications, this assumption can become very restrictive. In this paper we present an algorithm that creates gas concentration maps in a scenario...
In this paper, we aim to check whether the nonparametric part in a partial linear regression model is parametric or not in the presence of a right-censored response. We propose a test statistic which can be considered a kernel-based smoothing estimator of some moment condition. We investigate our proposed test's asymptotic properties under the null and local alternative hypotheses. Simulation studies...
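The kernel-smoothing building block underlying such a test can be sketched as a Nadaraya-Watson estimator: a locally weighted average with Gaussian kernel weights. This is a generic sketch only; the censoring adjustment and the paper's moment-condition statistic are not reproduced.

```python
import math

def nadaraya_watson(x_query, xs, ys, h):
    """Kernel (Nadaraya-Watson) regression estimate at x_query.

    A weighted average of the responses ys, with Gaussian weights
    K((x_query - x_i) / h) controlled by the bandwidth h.
    """
    weights = [math.exp(-0.5 * ((x_query - xi) / h) ** 2) for xi in xs]
    total = sum(weights)
    if total == 0.0:
        return 0.0
    return sum(w * y for w, y in zip(weights, ys)) / total
```

A test of parametric fit would then compare a smoother of this kind against the parametric candidate and reject when the smoothed residuals are systematically nonzero.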
In cutting-edge CPU/GPU hybrid clusters, such as Tianhe-1A, the aggregate CPU computing capability may amount to up to 1/3 of the aggregate GPU computing capability. It thus goes without saying that the CPUs and GPUs should jointly carry out the computational work. However, using both hardware components effectively and simultaneously requires great care when developing the parallel implementations...
In this research, we develop a real-time fluid simulator based on smoothed particle hydrodynamics (SPH) for virtual environments containing 3-dimensional fluid. SPH is a particle method whose computational time is easy to control by increasing or decreasing the number of particles; however, it is difficult to change the number of particles while maintaining the same volume of fluid and the stability...
The CARRX model measures financial volatility using range. To improve the forecasting ability of the CARRX model, a new volatility forecasting method combining least squares support vector regression (LSSVR) with adaptive particle swarm optimization (APSO) is proposed to extend the traditional CARRX model. The non-parametric CARRX model is constructed using LSSVR, and the APSO algorithm is designed to select the optimal...
Recently, the Graphics Processing Unit (GPU) has evolved into a highly parallel, multithreaded, many-core processor with tremendous computational horsepower and very high memory bandwidth. To improve the simulation efficiency of complex flow phenomena in the field of computational fluid dynamics, a CUDA-based simulation algorithm for large eddy simulation using multiple GPUs is proposed. Our implementation...
The paper addresses the issue of coherence of radial implicative fuzzy systems that use S-shaped fuzzy sets in the antecedents of their rules. That is, the antecedents incorporate not only purely radial “bell-shaped” fuzzy sets but also S-shaped ones. We describe a computational model of these systems and introduce a sufficient condition that guarantees the existence of a consistent output...
We present a software package that supports teaching different parallel programming models in a computational science and engineering context. It implements a Finite Volume solver for the shallow water equations, with application to tsunami simulation in mind. The numerical model is kept simple, using patches of Cartesian grids as computational domain, which can be connected via ghost layers. The...
The bag-of-words (BoW) model treats images as an unordered set of local regions and represents them by visual word histograms. Implicitly, regions are assumed to be independently and identically distributed (iid), which is a poor assumption from a modeling perspective. We introduce non-iid models by treating the parameters of BoW models as latent variables which are integrated out, rendering all local...
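The standard (iid) representation that the paper takes as its starting point can be sketched directly: assign each local descriptor to its nearest visual word and count the assignments. The vocabulary and descriptors below are hypothetical 2-D toy data; the paper's non-iid latent-variable extension, which integrates out the histogram parameters, is not shown.

```python
def assign_word(descriptor, vocabulary):
    """Index of the nearest visual word under squared Euclidean distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(vocabulary)), key=lambda k: dist2(descriptor, vocabulary[k]))

def bow_histogram(descriptors, vocabulary):
    """Standard iid BoW representation: per-image count of each visual word."""
    hist = [0] * len(vocabulary)
    for d in descriptors:
        hist[assign_word(d, vocabulary)] += 1
    return hist

# Hypothetical 2-D local descriptors and a 3-word vocabulary.
vocab = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
regions = [(0.1, 0.1), (0.9, 1.1), (0.2, 0.0), (0.1, 0.9)]
hist = bow_histogram(regions, vocab)
```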