We focus on sorting, which is the building block of many machine learning algorithms, and propose a novel distributed sorting algorithm, named CodedTeraSort, which substantially improves the execution time of the TeraSort benchmark in Hadoop MapReduce. The key idea of CodedTeraSort is to impose structured redundancy in data, in order to enable in-network coding opportunities that overcome the data...
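The abstract is truncated, but the core idea it names, creating in-network coding opportunities through redundant data placement, can be illustrated with a toy sketch (not the paper's actual CodedTeraSort protocol): if each of two receivers already caches the packet the other one needs, a sender can serve both with a single XOR-coded multicast instead of two unicasts.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    # Bitwise XOR of two equal-length byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

# Two data packets destined for two different reducers.
p1 = b"packet-one"
p2 = b"packet-two"

# Thanks to redundant placement, reducer A already caches p2 and
# reducer B already caches p1, so the mapper multicasts one coded
# packet instead of sending two unicast packets.
coded = xor_bytes(p1, p2)

# Each reducer cancels out the packet it already holds.
recovered_at_A = xor_bytes(coded, p2)  # A wants p1
recovered_at_B = xor_bytes(coded, p1)  # B wants p2
```

Halving the shuffle traffic in this two-node toy case is the same trade the paper describes at scale: extra redundancy in the map phase buys reduced communication in the shuffle phase.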
Parallel programming models for distributed memory systems rely on communication operations, whose behavior is critical to the performance of parallel application codes. These operations directly affect the runtime of a parallel application and must be used efficiently to minimize their overhead. One-sided communication paradigms offer low communication latencies. However,...
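The defining property of one-sided communication is that the origin process accesses the target's memory directly, without the target posting a matching receive. A shared-memory thread analogue (a conceptual sketch only, not an MPI RMA implementation; `window` and `epoch_done` are hypothetical names) looks like this:

```python
import threading

# A crude one-sided "put": the origin writes directly into the
# target's exposed memory window; the target never executes a
# matching receive call.
window = [0] * 8            # memory region exposed by the "target"
epoch_done = threading.Event()

def origin():
    window[3] = 42          # one-sided put into the remote window
    epoch_done.set()        # close the access epoch (akin to a fence)

t = threading.Thread(target=origin)
t.start()
epoch_done.wait()           # target synchronizes on the epoch, not on a recv
t.join()
```

The latency advantage the abstract mentions comes from exactly this decoupling: data movement is separated from the two-sided send/receive handshake, at the cost of the explicit epoch synchronization shown above.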
In this work we propose an accelerated stochastic learning system for very large-scale applications. Acceleration is achieved by mapping the training algorithm onto massively parallel processors: we demonstrate a parallel, asynchronous GPU implementation of the widely used stochastic coordinate descent/ascent algorithm that can provide up to 35× speed-up over a sequential CPU implementation. In order...
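For readers unfamiliar with the algorithm family being accelerated, a minimal sequential sketch of coordinate descent for least squares (an illustrative example, not the paper's GPU implementation) shows the per-coordinate update that the parallel asynchronous version distributes across GPU threads:

```python
import numpy as np

def coordinate_descent_lsq(A, b, iters=100):
    # Minimize ||Ax - b||^2 by cyclic coordinate descent:
    # each step exactly minimizes the objective over one coordinate x_j.
    n = A.shape[1]
    x = np.zeros(n)
    r = b - A @ x                      # running residual
    col_sq = (A ** 2).sum(axis=0)      # precomputed ||A_j||^2
    for _ in range(iters):
        for j in range(n):
            # Closed-form optimal update for x_j with the others fixed.
            delta = A[:, j] @ r / col_sq[j]
            x[j] += delta
            r -= delta * A[:, j]
    return x

# Tiny worked example: for a diagonal system the solution is exact.
A = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([1.0, 2.0])
x = coordinate_descent_lsq(A, b)
```

In the asynchronous GPU setting described above, many such coordinate updates run concurrently on possibly stale residuals, which is what yields the reported speed-up over this strictly sequential loop.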
RNNLM (Recurrent Neural Network Language Model) retains the historical information of the training data in its last hidden layer, which is also fed back as input for training. It has become an interesting topic in Natural Language Processing research. However, the immense training time is a serious problem. The large output layer, hidden layer, last hidden layer and the connections among...
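The feedback of the hidden layer described above can be made concrete with a minimal Elman-style recurrence (a toy sketch with made-up dimensions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)
V, H = 5, 3            # toy vocabulary size and hidden size

# Parameters of a minimal recurrent cell.
W_xh = rng.normal(scale=0.1, size=(V, H))  # input-to-hidden weights
W_hh = rng.normal(scale=0.1, size=(H, H))  # hidden-to-hidden weights
b_h = np.zeros(H)

def rnn_step(x_t, h_prev):
    # h_t compresses the entire input history into a fixed-size
    # vector, which is fed back in as input at the next step.
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

# Run over a short token sequence (one-hot inputs).
h = np.zeros(H)
for tok in [0, 2, 4]:
    x = np.eye(V)[tok]
    h = rnn_step(x, h)
```

The training cost the abstract complains about scales with the output-layer softmax over the full vocabulary and with these dense hidden-layer connections, which is why the layer sizes dominate the runtime.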
A novel approach is presented for teaching the parallel and distributed computing concepts of synchronization and remote memory access. The single program multiple data (SPMD) partitioned global address space (PGAS) model presented in this paper uses a procedural programming language that appeals to undergraduate students. We propose that the engaging nature of the approach may foster creativity and interest...
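The SPMD/PGAS pattern being taught can be sketched in Python threads (an illustration only; the paper's actual teaching language is not stated in this snippet): every rank runs the same program, writes its own partition of a shared address space, synchronizes at a barrier, and then reads a neighbor's partition remotely.

```python
import threading

NP = 4                          # number of SPMD "images" (ranks)
shared = [0] * NP               # the partitioned global address space
neighbor = [0] * NP             # each rank's view of its neighbor's data
barrier = threading.Barrier(NP)

def spmd_body(rank: int):
    # Every rank executes this same program, specialized only by rank.
    shared[rank] = rank + 1     # write to the locally owned partition
    barrier.wait()              # synchronize before any remote reads
    neighbor[rank] = shared[(rank + 1) % NP]  # "remote" memory access

threads = [threading.Thread(target=spmd_body, args=(r,)) for r in range(NP)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The barrier is the synchronization concept and the cross-partition read is the remote memory access concept; omitting the barrier makes the read a race, which is itself a useful classroom exercise.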