We have enabled work migration in the CoMD proxy application to study dynamic load imbalance. Proxy applications are developed to simplify studying parallel performance of scientific simulations and to test potential solutions for performance problems. However, proxy applications are typically too simple to allow work migration or to represent the load imbalance of their parent applications. To study...
We study the online load balancing problem for two independent criteria in heterogeneous systems. For convenience, we choose a system of distributed file servers located in a cluster as the scenario, although our work is not limited to it. Every server is assigned upper bounds for its load and storage space. We assume that the heterogeneity of servers is eventually reflected by the difference of these...
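The two-bound setting described above can be illustrated with a small greedy sketch: each incoming request consumes both load and storage, and an online rule places it on the server whose larger fractional usage stays smallest while respecting both upper bounds. This is a hypothetical heuristic for illustration only, not the paper's algorithm; all capacities and request sizes are invented.

```python
def place(requests, load_cap, store_cap):
    """Online placement under two criteria: per-server load and storage bounds.

    Each request is a (load, storage) pair; it goes to the feasible server
    whose worst fractional usage after placement is smallest.
    """
    n = len(load_cap)
    load = [0.0] * n
    store = [0.0] * n
    placement = []
    for l, s in requests:
        best, best_frac = None, None
        for i in range(n):
            if load[i] + l > load_cap[i] or store[i] + s > store_cap[i]:
                continue  # placing here would violate an upper bound
            frac = max((load[i] + l) / load_cap[i], (store[i] + s) / store_cap[i])
            if best is None or frac < best_frac:
                best, best_frac = i, frac
        if best is None:
            raise RuntimeError("no server can accept the request")
        load[best] += l
        store[best] += s
        placement.append(best)
    return placement

print(place([(1, 2), (1, 2), (2, 1)], load_cap=[4, 2], store_cap=[4, 4]))
```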
A distributed system consists of several autonomous nodes. In a distributed system some of the nodes may be overloaded due to a large number of job arrivals while other nodes may remain idle without any processing. The performance of a distributed system depends crucially on dividing up work effectively among the computing nodes. So a way is needed to share load across all the computing nodes. In...
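A minimal sketch of the load-sharing idea above: greedily send each arriving job to the currently least-loaded node. The job costs and node count here are invented for illustration; a real distributed system would make this decision with partial and delayed load information.

```python
import heapq

def assign_jobs(job_costs, num_nodes):
    """Greedily assign each job to the node with the smallest total load."""
    # Min-heap of (current_load, node_id) pairs.
    heap = [(0, n) for n in range(num_nodes)]
    heapq.heapify(heap)
    assignment = {}
    for job_id, cost in enumerate(job_costs):
        load, node = heapq.heappop(heap)   # least-loaded node
        assignment[job_id] = node
        heapq.heappush(heap, (load + cost, node))
    return assignment

print(assign_jobs([5, 3, 8, 2], num_nodes=2))
```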
In the field of scientific computing, load balancing is a major issue that determines the performance of parallel applications. Nowadays, simulations of real-life problems are becoming more and more complex, involving numerous coupled codes, representing different models. In this context, reaching high performance can be a great challenge. In this paper, we present graph partitioning techniques, called...
Nowadays, cloud computing is regarded as an evolution of the internet and will support future internet development. In this paper, we abstract the load balancing problem in cloud computing as a model in which a number of users occupy the computing resources, and we introduce price variation into the model. We formulate this problem as a cooperative game among job processing nodes. Processors work...
OpenFOAM is a widely used open-source CFD application. Based on mesh partitioning, applications in OpenFOAM can run in parallel to achieve better performance. When the mesh generated from the flow field is large, the performance of the partitioning algorithms heavily affects the execution efficiency of the whole application. In this paper, we investigate the four partitioning algorithms implemented in OpenFOAM-Simple,...
The performance of applications executed on large parallel systems suffers due to load imbalance. Load balancing is required to scale such applications to large systems. However, performing load balancing incurs a cost, which may not be known a priori. In addition, application characteristics may change due to their dynamic nature and the parallel system used for execution. As a result, deciding when to balance...
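The "when to balance" decision described above can be illustrated with a simple cost-benefit rule: rebalance only when the time projected to be saved over the remaining iterations exceeds an estimate of the balancing cost. The linear model and all numbers below are assumptions for illustration, not the paper's method.

```python
def should_balance(loads, remaining_iters, balance_cost):
    """Return True if rebalancing is predicted to pay off.

    loads: per-rank time of the current iteration.
    """
    t_max = max(loads)               # iteration time is set by the slowest rank
    t_avg = sum(loads) / len(loads)  # ideal time if load were perfectly balanced
    projected_saving = (t_max - t_avg) * remaining_iters
    return projected_saving > balance_cost

print(should_balance([1.0, 1.0, 2.0, 1.0], remaining_iters=100, balance_cost=10.0))
```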
Partitioning plays an important role in PDES performance due to the high communication cost in parallel platforms and the fine-granularity of most simulation models. Traditionally, models are partitioned by deriving the static communication graph of objects and applying graph partitioning to reduce the mincut while load balancing the number of objects. However, many, if not all, models exhibit great...
Preventing and controlling outbreaks of infectious diseases such as pandemic influenza is a top public health priority. EpiSimdemics is an implementation of a scalable parallel algorithm to simulate the spread of contagion, including disease, fear and information, in large (10^8 individuals), realistic social contact networks using individual-based models. It also has a rich language for describing...
In computational grids, effective scheduling of jobs and resources plays an essential role in optimizing and enhancing the quality of services provided by service providers to service consumers. To achieve this collaborative harmony, proper utilization and allocation of grid computing entities is imperative. This paper primarily focuses on the optimal model for...
There are many data- and computation-intensive applications that generally require very high performance and a lot of computing resources, which leads to an increase in the overall execution time. Parallel computing can improve overall execution time by breaking a large program into smaller pieces that can be executed on a multiprocessor system. Meanwhile, distributed computing offers some...
In this paper, we present two techniques for inter- and intra-node data partitioning aimed at load balancing MPI applications on heterogeneous multicore platforms. For load balancing between the multicore nodes of a heterogeneous multicore cluster, we propose how to define a functional performance model of an individual multicore node as a single computing unit, and use these models for data partitioning...
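As a toy illustration of speed-proportional data partitioning, data can be split in proportion to measured node speeds. This is a constant-speed simplification invented for illustration; the functional performance models discussed above make speed a function of problem size.

```python
def partition(total_items, speeds):
    """Split total_items among nodes proportionally to their measured speeds."""
    total_speed = sum(speeds)
    shares = [int(total_items * s / total_speed) for s in speeds]
    # Hand out any remainder left by integer truncation, one item at a time.
    for i in range(total_items - sum(shares)):
        shares[i % len(shares)] += 1
    return shares

print(partition(100, [1.0, 2.0, 5.0]))
```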
Network model partitioning is a key component of distributed network simulations. Simulations slow down considerably due to inequitable load balancing and heavy inter-host communication, leading to unbounded synchronization overhead. Also, regularly refreshing the node partition is necessary due to the dynamic nature of simulation load and event generation. In this paper, we propose a distributed...
The parallel external memory (PEM) model has been used as a basis for the design and analysis of a wide range of algorithms for private-cache multi-core architectures. As a tool for developing geometric algorithms in this model, a parallel version of the I/O-efficient distribution sweeping framework was introduced recently, and a number of algorithms for problems on axis-aligned objects were obtained...
Grid computing is the combination of computer resources from multiple administrative domains for a common goal. Grid computing (or the use of a computational grid) is applying the resources of many computers in a network to a single problem at the same time — usually to solve a scientific or technical problem that requires a great number of computer processing cycles or access to large amounts of...
In this paper, we propose a decentralized parallel computation model for global optimization using interval analysis. The model is adaptive to any number of processors, and there is no need to design an initial decomposition scheme to feed each processor at the beginning. The workload is distributed evenly among all processors by alternating message passing. Numerical experiments indicate that the...
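As a sequential, single-processor illustration of interval-based global optimization, the sketch below runs interval branch and bound on the toy function f(x) = x^2 - 2x; the decentralized model described above would instead spread the work list of sub-intervals across processors via message passing. The interval lower bound is hand-rolled for this one function, and everything here is an assumption for illustration.

```python
def f_bounds(lo, hi):
    """Crude interval lower bound for f(x) = x*x - 2*x on [lo, hi]."""
    sq_lo = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    return sq_lo - 2.0 * hi  # min of x^2 minus max of 2x on the box

def interval_minimize(lo, hi, tol=1e-6):
    """Branch and bound: keep splitting boxes whose bound could beat the best."""
    best = min(lo * lo - 2 * lo, hi * hi - 2 * hi)  # best value at the endpoints
    work = [(lo, hi)]
    while work:
        a, b = work.pop()
        if f_bounds(a, b) > best:       # box cannot contain a better minimum
            continue
        mid = (a + b) / 2.0
        best = min(best, mid * mid - 2.0 * mid)
        if b - a > tol:
            work.extend([(a, mid), (mid, b)])
    return best

print(round(interval_minimize(-2.0, 3.0), 6))
```

The true minimum of x^2 - 2x is -1 at x = 1, which the bisection recovers to within the tolerance.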
A general parallel direct simulation Monte Carlo method may result in a strongly unbalanced distribution of work, leading to very low speedups. In this paper, load balancing between processors is achieved through an improved adaptive decomposition technique. The method has been implemented on a cluster, using a master-slave architecture to minimize communication. Applications were made for...
Contingency analysis is a key function in the Energy Management System (EMS) to assess the impact of various combinations of power system component failures based on state estimation. Contingency analysis is also extensively used in power market operation for feasibility test of market solutions. High performance computing holds the promise of faster analysis of more contingency cases for the purpose...
We re-examine the problem of load balancing in conservatively synchronized parallel discrete-event simulations executed on high-performance computing clusters, focusing on simulations where computational and messaging load tend to be spatially clustered. Such domains are frequently characterized by the presence of geographic "hot-spots" - regions that generate significantly more simulation...
In this research, four static load balancing algorithms (round robin, randomized, central manager, and threshold) are simulated and their performance is compared. The simulation is performed using a discrete event simulator. The load indices used for the central manager and threshold algorithms are CPU, memory, and hard disk I/O. The simulation of the four algorithms is done against three types of programs...
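For concreteness, two of the four policies named above (round robin and threshold) can be sketched as follows. The job names, load values, and threshold are invented, and this is not the simulator used in the study.

```python
from itertools import cycle

def round_robin(jobs, nodes):
    """Assign jobs to nodes in fixed cyclic order, ignoring load."""
    order = cycle(nodes)
    return {job: next(order) for job in jobs}

def threshold(jobs, node_loads, limit):
    """Run each job on its home node unless that node's load has reached
    `limit`, in which case it is handed to the least-loaded node."""
    assignment = {}
    for job, home in jobs:          # jobs arrive as (job, home_node) pairs
        target = home
        if node_loads[home] >= limit:
            target = min(node_loads, key=node_loads.get)
        node_loads[target] += 1
        assignment[job] = target
    return assignment

print(round_robin(["j1", "j2", "j3"], ["n1", "n2"]))
print(threshold([("j1", "n1"), ("j2", "n1")], {"n1": 1, "n2": 0}, limit=2))
```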