Stencil computations are not well optimized by general-purpose production compilers and the increased use of multicore, manycore, and accelerator-based systems makes the optimization problem even more challenging. In this paper we present Snowflake, a Domain Specific Language (DSL) for stencils that uses a "micro-compiler" approach, i.e., small, focused, domain-specific code generators....
We focus on sorting, which is the building block of many machine learning algorithms, and propose a novel distributed sorting algorithm, named CodedTeraSort, which substantially improves the execution time of the TeraSort benchmark in Hadoop MapReduce. The key idea of CodedTeraSort is to impose structured redundancy in data, in order to enable in-network coding opportunities that overcome the data...
It is commonly the case that a small number of widely used applications make up a large fraction of the workload of HPC centers. Predicting the performance of important applications running on specific processors enables HPC centers to design the best-performing system configurations and to ensure good performance for the most popular applications on new systems. In the analyses presented in this paper...
Data generated from modern scientific instrumentation have grown to an unprecedented scale. Moreover, the data formats and computational behaviors of scientific big data workloads are much more complex than those in Internet services. These two facts pose a serious challenge to scientific data management and analytics. Among many concerns, the first one is how to build a comprehensive and representative...
Big Data, expressed as "Big Graphs", is growing in importance. Looking forward, there is also increasing interest in streaming versions of the associated analytics. This paper develops an initial template for the relationship between "traditional" batch graph problems and streaming forms. Variations of streaming problems are discussed, along with their relationship to existing...
Chapel is an emerging scalable, productive parallel programming language. In this work, we analyze Chapel's performance using the Parallel Research Kernels (PRK) on two different manycore architectures, including a state-of-the-art Intel Knights Landing processor. We discuss implementation techniques in Chapel and their relation to the OpenMP implementations of the PRK. We also suggest and prototype several...
The purpose of this study is to quantitatively assess the performance of graph processing algorithms for large scale-free graphs residing in byte-addressable Non-Volatile Memory (NVM). Our study focuses on static and dynamic graph algorithms previously optimized for external memory in the form of locally attached NAND Flash arrays, with data structures tuned to maximize locality. The evaluation is...
With the increase in the complexity and number of nodes in large-scale high performance computing (HPC) systems, the probability of applications experiencing failures has increased significantly. As the computational demands of applications that execute on HPC systems increase, projections indicate that applications executing on exascale-sized systems are likely to operate with a mean time between...
This paper seeks to address the disconnect between different stages of the FPGA CAD flow that often adversely affects the quality of results of the implemented designs. In particular, a machine-learning framework is presented, consisting of a suite of classification and regression techniques, to model the underlying relationship between the characteristics of circuits and the best CAD algorithm (and...
Determining key characteristics of High Performance Computing machines that allow users to predict their performance is an old and recurrent dream. This was, for example, the rationale behind the design of the LogP model, which later evolved into many variants (LogGP, LogGPS, LoGPS, etc.) to cope with the evolution and complexity of network technology. Although the network has received a lot of attention,...
In this paper we investigate an emerging application, 3D scene understanding, likely to be significant in the mobile space in the near future. The goal of this exploration is to reduce execution time while meeting our quality of result objectives. In previous work, we showed for the first time that it is possible to map this application to power constrained embedded systems, highlighting that decision...
Scientists who want to exploit the computing power of the latest parallel architectures are faced with a diverse set of architectures and a number of programming languages, models and approaches. Among several such programming techniques are directive-based programming models, OpenMP and OpenACC. This paper explores the similarities and the functionality gaps between both models and presents insights...
Hardware accelerators have become a de facto standard for achieving high performance on current supercomputers, and there are indications that this trend will continue. Modern accelerators feature high-bandwidth memory next to the computing cores. For example, the Intel Knights Landing (KNL) processor is equipped with 16 GB of high-bandwidth memory (HBM) that works together with conventional...
We explore the use of synthetic benchmarks for the training phase of machine-learning-based automatic performance tuning. We focus on the problem of predicting whether the use of local memory on a GPU is beneficial for caching a single target array in a GPU kernel. We show that the use of only 13 real benchmarks leads to poor prediction accuracy (about 58%) of the 13 leave-one-out models trained using...
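The leave-one-out protocol mentioned in this abstract can be sketched as follows. This is a minimal illustration, not the paper's method: the benchmark feature vectors, labels, and the toy nearest-neighbour classifier below are all invented stand-ins for the paper's actual features and model.

```python
# Hypothetical (feature_vector, local_memory_is_beneficial) pairs,
# one per benchmark; real features would come from kernel analysis.
benchmarks = [
    ((1.0, 0.2), True), ((0.9, 0.3), True), ((0.8, 0.1), True),
    ((0.2, 0.9), False), ((0.1, 0.8), False), ((0.3, 0.7), False),
]

def predict(train, x):
    # Toy 1-nearest-neighbour classifier by squared Euclidean distance.
    nearest = min(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
    return nearest[1]

def leave_one_out_accuracy(data):
    # Train on all benchmarks but one, test on the held-out benchmark,
    # and average the hit rate over every choice of held-out benchmark.
    hits = sum(predict(data[:i] + data[i + 1:], x) == y
               for i, (x, y) in enumerate(data))
    return hits / len(data)

print(leave_one_out_accuracy(benchmarks))
```

With n benchmarks this yields n models, each evaluated on the single benchmark it never saw, which is why only 13 real benchmarks give such a small and noisy evaluation set.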
We present a novel strategy for automatic performance tuning of GPU computational kernels. The strategy combines heuristic search with regression trees to prune the optimization space. It samples configurations in the space and uses these samples to build a regression tree. It then focuses the search on the leaf region of the tree with the best mean sample performance. Additional configurations are...
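The three steps named in this abstract (sample the space, fit a regression tree, focus the search on the best leaf) can be sketched as below. This is a simplified illustration under invented assumptions: the cost surface, the two tuning parameters, and the one-split stand-in for a full regression tree are not from the paper.

```python
import random

# Synthetic stand-in for timing a GPU kernel: a cost surface over two
# hypothetical tuning parameters (thread-block size exponent, unroll
# factor) with its optimum at block_exp=7, unroll=4. In the real
# strategy this would be an actual measurement on the device.
def measure_runtime(block_exp, unroll):
    return abs(block_exp - 7) * 3.0 + abs(unroll - 4) * 1.5

def tune(n_samples=12, seed=1):
    rng = random.Random(seed)
    space = [(b, u) for b in range(4, 11) for u in (1, 2, 4, 8)]

    # Step 1: sample configurations from the space and measure them.
    samples = [(cfg, measure_runtime(*cfg))
               for cfg in rng.sample(space, n_samples)]

    # Step 2: a one-split "regression tree" on the block-size parameter:
    # split at the median sampled value, compare mean runtime per side.
    median_b = sorted(cfg[0] for cfg, _ in samples)[len(samples) // 2]
    left = [s for s in samples if s[0][0] <= median_b]
    right = [s for s in samples if s[0][0] > median_b]
    mean = lambda side: sum(t for _, t in side) / len(side)
    go_left = (not right) or mean(left) <= mean(right)

    # Step 3: focus the search on the leaf region with the best mean
    # sample performance, evaluating the configurations in that region.
    region = [c for c in space if (c[0] <= median_b) == go_left]
    return min(region, key=lambda c: measure_runtime(*c))
```

A real regression tree would split recursively on several parameters; the single split here only shows how sample means steer the search toward a promising subregion of the space.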
Equipped with the Chinese home-grown SW26010 many-core processor, TaihuLight claims the top place in the TOP500 list released in June 2016. Although some large-scale applications have been successfully running on the supercomputer, few studies have been conducted to analyze the performance impact caused by the extreme memory-bound architecture design. To facilitate native in-depth optimizations and...