Many modern computing platforms—notably clouds and desktop grids—exhibit dynamic heterogeneity: the availability and computing power of their constituent resources can change unexpectedly and dynamically, even in the midst of a computation. We introduce a new quality metric, AREA, for schedules that execute computations having interdependent constituent chores (jobs, tasks, etc.) on such platforms...
A Distributed Network System (DNS) is a set of application and system programs, and data exchanged across a number of independent personal computers connected by a communication network. Task allocation in a distributed network system is always challenging, and it is very helpful for enhancing the performance of a DNS. There are two types of approaches for task allocation, and these...
To improve their performance, scientific applications often use loop scheduling algorithms as techniques for load balancing data parallel computations. Over the years, a number of dynamic loop scheduling (DLS) techniques have been developed. These techniques are based on probabilistic analyses, and are effective in addressing unpredictable load imbalances in the system arising from various sources,...
Recent progress in processing speeds, network bandwidths, and middleware technologies has contributed to novel computing platforms, ranging from large-scale computing clusters to globally distributed systems. Consequently, most current computing systems possess different types of heterogeneous processing resources. Entering the peta-scale computing era and beyond, reconfigurable processing...
In a computational grid, effective scheduling of jobs and resources plays an essential role in optimizing and enhancing the quality of service provided to service consumers by service providers. To achieve this collaborative harmony, proper utilization and allocation of grid computing entities is imperative. This paper primarily focuses on an optimal model for...
Sensor node processing in resource-aware sensor networks is often critically dependent on dynamic signal processing functionality — i.e., signal processing functionality in which computational structure must be dynamically assessed and adapted based on time-varying environmental conditions, operating constraints or application requirements. In dynamic signal processing systems, it is important to...
Recent advances in parallel and distributed computing have made it very challenging for programmers to reach the performance potential of current systems. In addition, recent advances in numerical algorithms and software optimizations have tremendously increased the number of alternatives for solving a problem, which further complicates the software tuning process. Indeed, no single algorithm can...
It is not uncommon for grid users to observe highly variable performance when they submit similar workloads at different times. From the users' point of view, such inconsistent performance is undesirable, and it leads to user dissatisfaction and confusion. We tackle this performance inconsistency problem using overprovisioning, i.e., increasing the system capacity by a factor that we call the overprovisioning...
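The core idea of overprovisioning, scaling the provisioned capacity by a constant factor so that workload peaks no longer cause slowdown, can be sketched in a few lines. The demand figures and the `slowdowns` helper below are purely illustrative assumptions, not taken from the abstract:

```python
def slowdowns(demands, capacity):
    """Slowdown per interval: 1.0 when demand fits within capacity,
    otherwise proportional to how far demand exceeds capacity."""
    return [max(1.0, d / capacity) for d in demands]

# Hypothetical fluctuating workload (arbitrary units)
demands = [40, 80, 120, 200]

base = slowdowns(demands, 100)        # base capacity, no overprovisioning
over = slowdowns(demands, 100 * 2.0)  # overprovisioning factor of 2.0
# base -> [1.0, 1.0, 1.2, 2.0]; over -> [1.0, 1.0, 1.0, 1.0]
```

With a large enough factor, the observed slowdown becomes flat across intervals, which is precisely the "consistent performance" the abstract targets, at the cost of idle capacity during troughs.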
Embedded systems increasingly include heterogeneous compute resources. Yet the vast majority of real-time scheduling methods are designed for single-resource or homogeneous multi-resource systems. Heterogeneity complicates scheduling; task execution time is resource-dependent. Furthermore, the best resource for one task may not necessarily be the best resource for all tasks, so one resource may not...
Future systems will have to support multiple and concurrent dynamic compute-intensive applications, while respecting real-time and energy consumption constraints. With the increase in the design complexity of MPSoC architectures that must support these constraints, flexible and accurate simulators become a necessity for exploring the vast design space solutions. In this paper, we present an asymmetric...
Grid users may experience inconsistent performance due to specific characteristics of grids, such as fluctuating workloads, high failure rates, and high resource heterogeneity. Although extensive research has been done in grids, providing consistent performance remains largely an unsolved problem. In this study we use overdimensioning, a simple but cost-ineffective solution, to solve the performance...
Coupled multi-physics simulations, such as hybrid CFD-MD simulations, represent an increasingly important class of scientific applications. Often the physical problems of interest demand the use of high-end computers, such as TeraGrid resources, which are often accessible only via batch queues. Batch-queue systems were not designed to natively support the coordinated scheduling of jobs, which in...
For applications like 3D seismic migration, it is quite important to improve I/O performance within a cluster computing system. Such seismic data processing applications are I/O-intensive. For example, a large 3D data volume cannot be held entirely in computer memory, so the input data files have to be divided into many fine-grained chunks. Intermediate results...
During the last decade, the use of parallel and distributed systems has become more widespread. In these systems, a huge amount of data or computation is distributed among many machines to obtain better performance. Dividing the data is one of the challenges in parallel and distributed systems. One proposed method for managing data distribution is Divisible Load Theory (DLT). Ten reasons for using DLT have...
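As a rough illustration of the DLT idea, the simplest case assumes a load that is arbitrarily divisible and zero communication cost: each worker's share is made proportional to its speed so that all workers finish simultaneously. The function name and the speed values below are hypothetical, not from the abstract:

```python
def dlt_split(total_load, speeds):
    """Split a divisible load so every worker finishes at the same time.
    With zero communication cost, worker i's share is proportional to
    its speed (work processed per unit time)."""
    total_speed = sum(speeds)
    return [total_load * s / total_speed for s in speeds]

speeds = [1.0, 2.0, 5.0]              # hypothetical worker speeds
shares = dlt_split(100.0, speeds)     # -> [12.5, 25.0, 62.5]
times = [share / s for share, s in zip(shares, speeds)]  # all equal
```

Real DLT formulations extend this closed form with communication latencies and delivery order, but the equal-finish-time condition remains the defining constraint.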
In this paper, a novel dynamic task scheduling algorithm is proposed for parallel applications modeled as Kahn process networks (KPN) running on a distributed multi-processor cluster. Static job scheduling algorithms do not serve this purpose because the complexity of a KPN model remains unpredictable at compile time. Dynamic load balancing strategies ignore the explicit data dependences among...
In this paper, a dynamic priority scheduling policy is integrated into on-chip communications to improve communication efficiency in networks-on-chip. This approach is more efficient than the conventional first-in-first-out (FIFO) policy in optimizing real-time multimedia applications. Simulink-based experiments on Motion-JPEG and H.264 decoding demonstrate the efficiency of our approach...
In this paper we give a theoretical model for determining the synchronization frequency that minimizes the parallel execution time of loops with uniform dependencies dynamically scheduled on heterogeneous systems. Using this model we determine the synchronization frequency that minimizes the estimated parallel time. The accuracy of our method is validated through experiments on a heterogeneous cluster...
The ever-changing demands on computational resources have information systems managers looking for more flexible solutions. Using a "bigger box" that has more and faster processors, more permanent storage, or more random access memory (RAM) is not a viable solution, as system usage patterns vary. In order for a system to handle the peak load adequately, it will go underutilized much...
We study two parallel scheduling problems and their use in designing parallel algorithms. First, we define a novel scheduling problem; it is solved by repeated, rapid, approximate reschedulings. This leads to a first optimal PRAM algorithm for list ranking, which runs in logarithmic time. Our second scheduling result is for computing prefix sums of log n-bit numbers. We give an optimal parallel algorithm...
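For context, the generic work-efficient exclusive prefix-sum (scan) pattern, here the Blelloch up-sweep/down-sweep simulated sequentially, looks as follows. This is a textbook sketch of parallel prefix sums in general, not the paper's optimal PRAM algorithm for log n-bit numbers:

```python
def exclusive_scan(xs):
    """Blelloch exclusive scan, simulated sequentially.
    Each while-iteration corresponds to one parallel step; the inner
    for-loops would run concurrently on a PRAM. Assumes len(xs) is a
    power of two."""
    n = len(xs)
    a = list(xs)
    # Up-sweep (reduce): build partial sums in a tree.
    d = 1
    while d < n:
        for i in range(2 * d - 1, n, 2 * d):
            a[i] += a[i - d]
        d *= 2
    # Down-sweep: push prefixes back down the tree.
    a[n - 1] = 0
    d = n // 2
    while d >= 1:
        for i in range(2 * d - 1, n, 2 * d):
            t = a[i - d]
            a[i - d] = a[i]
            a[i] += t
        d //= 2
    return a

exclusive_scan([3, 1, 7, 0, 4, 1, 6, 3])
# -> [0, 3, 4, 11, 11, 15, 16, 22]
```

The scheme does O(n) total work over O(log n) parallel steps, which is why scan-based scheduling primitives like the ones these abstracts discuss are attractive on parallel machines.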