Deep learning is a machine learning model loosely based on the brain. Artificial neural networks have been around since the 1950s, but recent advances in hardware such as graphics processing units (GPUs), in software such as cuDNN, TensorFlow, Torch, Caffe, Theano, and Deeplearning4j, and in new training methods have made training artificial neural networks fast and easy. In this paper, we compare some...
In the field of computational fluid dynamics, the Navier-Stokes equations are often solved using an unstructured-grid approach to accommodate geometric complexity. Implicit solution methodologies for such spatial discretizations generally require frequent solution of large tightly-coupled systems of block-sparse linear equations. The multicolor point-implicit solver used in the current work typically...
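The core idea behind a multicolor point-implicit solver is to color the grid points so that no two coupled points share a color; all points of one color can then be updated simultaneously. The sketch below is a minimal illustration of that scheme, not the solver from the abstract: it uses a greedy coloring and a dense NumPy matrix, whereas a production CFD code would operate on block-sparse storage with GPU threads.

```python
import numpy as np

def greedy_coloring(adj):
    """Greedily assign colors so no two coupled points share a color."""
    n = len(adj)
    colors = [-1] * n
    for i in range(n):
        used = {colors[j] for j in adj[i] if colors[j] >= 0}
        c = 0
        while c in used:
            c += 1
        colors[i] = c
    return colors

def multicolor_point_implicit(A, b, sweeps=50):
    """Point-implicit (Gauss-Seidel-like) sweeps ordered by color.
    Points of the same color have no mutual coupling, so each color
    group could be updated in parallel (e.g. one GPU thread per point)."""
    n = len(b)
    adj = [[j for j in np.nonzero(A[i])[0] if j != i] for i in range(n)]
    colors = greedy_coloring(adj)
    x = np.zeros(n)
    for _ in range(sweeps):
        for c in sorted(set(colors)):
            # All points of color c are independent of one another.
            for i in (i for i in range(n) if colors[i] == c):
                x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x
```

On a 1D Laplacian the greedy coloring reduces to the classic red-black ordering, and the sweeps converge at the usual Gauss-Seidel rate.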
In this paper, the finite element method for 3D DC resistivity modeling is accelerated using multiple GPUs (Graphics Processing Units). Solving the large system of linear equations is the most expensive computation in the finite element method, so it is performed on GPUs to reduce the computational time. A conjugate gradient solver is used to solve the large system of linear equations. We developed a kernel for the conjugate gradient...
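For reference, the conjugate gradient iteration mentioned above consists of a handful of building blocks (matrix-vector product, dot products, vector updates), which is precisely why it maps well to GPU kernels. This is a generic NumPy sketch of the textbook algorithm, not the kernel from the paper:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain conjugate gradient for a symmetric positive-definite A.
    Each iteration needs one mat-vec, two dot products, and a few
    axpy updates -- exactly the kernels one would port to the GPU."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

In a multi-GPU setting each of these operations is distributed across devices, with the mat-vec requiring halo exchanges between partitions.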
GPGPUs and other accelerators are becoming a mainstream asset for high-performance computing. Raising the programmability of such hardware is essential to enable users to discover, master and subsequently use accelerators in day-to-day simulations. Furthermore, tools for high-level programming of parallel architectures are becoming a great way to simplify the exploitation of such systems. For this...
Remote GPU execution has been proven to increase GPU occupancy and reduce job waiting time in multi-GPU batch-queue systems, by allowing jobs to utilize remote GPUs when there are not enough unoccupied local GPUs available. However, for GPU communication intensive applications, remote GPU communication overhead may account for more than 70% of the applications' execution times. The need for using...
In this paper, we design a framework to automatically select the CPU or GPU environment and establish four general parallel foundation libraries. First, the system general foundation library and the mathematical foundation library are built on the basis of research into GPU parallel characteristics, CUDA programming technology, and the serial algorithm of forward and preserved-amplitude evaluation. Then, according...
Block-structured adaptive mesh refinement (AMR) is a technique that can be used when solving partial differential equations to reduce the number of cells necessary to achieve the required accuracy in areas of interest. These areas (shock fronts, material interfaces, etc.) are recursively covered with finer mesh patches that are grouped into a hierarchy of refinement levels. Despite the potential for...
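A minimal way to see the AMR idea is in 1D: flag cells where the solution changes sharply, then group contiguous flagged cells into patches that receive a finer mesh. The sketch below is a toy illustration of that flag-and-group step under an assumed gradient-threshold criterion; real block-structured AMR codes use more elaborate clustering (e.g. Berger-Rigoutsos) in multiple dimensions.

```python
import numpy as np

def flag_cells(u, threshold):
    """Flag cells whose undivided gradient exceeds the threshold
    (a common refinement criterion near shocks and interfaces)."""
    grad = np.abs(np.diff(u, append=u[-1]))
    return grad > threshold

def group_into_patches(flags):
    """Group contiguous flagged cells into patches [(start, end), ...];
    each patch would be covered by a finer mesh at the next level."""
    patches, start = [], None
    for i, f in enumerate(flags):
        if f and start is None:
            start = i
        elif not f and start is not None:
            patches.append((start, i))
            start = None
    if start is not None:
        patches.append((start, len(flags)))
    return patches
```

Applied recursively, this produces the hierarchy of refinement levels the abstract describes: each level's patches are flagged and re-covered with still finer patches in the areas of interest.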
In this contribution, a multi-graphics processing unit (GPU) implementation of Krylov subspace methods with algebraic multigrid preconditioners is proposed. It is used to solve large linear systems stemming from finite element or finite difference discretizations of elliptic problems as they occur, e.g., in electrostatics. The distribution of data across multiple GPUs and the effects on memory and...
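The preconditioner in such a solver enters the Krylov iteration through a single hook: applying an approximate inverse of the system matrix to the residual. The sketch below shows that interface with preconditioned conjugate gradient; a full algebraic multigrid cycle is well beyond a snippet, so a simple Jacobi (diagonal) preconditioner stands in here purely to illustrate where the AMG cycle would plug in.

```python
import numpy as np

def preconditioned_cg(A, b, apply_M_inv, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient for SPD A. `apply_M_inv`
    applies the preconditioner inverse to a residual; an AMG V-cycle
    (as in the contribution above) or a Jacobi scaling can be
    plugged in through this single hook."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

For example, with a Jacobi stand-in one would call `preconditioned_cg(A, b, lambda r: r / np.diag(A))`; in a multi-GPU code both the mat-vec and the preconditioner application are distributed across devices.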