With cloud computing, the efficient management of resources is of great importance, as increased utilization of the available resources can result in higher scalability and significant energy and cost reductions. Experimental validation of novel resource management strategies is costly and time-consuming, and often requires in-depth knowledge of, and control over, the underlying cloud platform. As...
With cloud computing, efficient resource management is of great importance, as it has a direct impact on the scalability of the cloud application and can result in significant energy and cost reductions. In recent years, a great deal of research has been devoted to the management of cloud resources, resulting in multiple novel resource allocation strategies. Validation of these strategies, however, is...
The widening gap between processor and memory performance has led to the inclusion of multiple levels of caches in modern multi-core systems. Processors with simultaneous multithreading (SMT) support multiple hardware threads on the same physical core, so that the per-core private caches are shared among those threads. Any inefficiency in the cache hierarchy can negatively impact system performance, and motivates...
As the slowdown of processor scaling under Moore's law accelerates research into new types of computing structures, the need arises to rethink operating-system paradigms. Traditionally, an operating system is a layer between hardware and applications, and its primary function is to manage hardware resources and provide a common abstraction to applications. How does this function apply,...
With the growing adoption of virtualized data centers and cloud services, multi-resource scheduling is increasingly attractive to researchers. Previous studies have made progress in this area. However, existing heuristics have obvious limitations in complex software-defined cloud environments. A truly multi-dimensional model is needed to solve this NP-hard problem. Our approach emphasizes...
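The NP-hard core of such multi-resource scheduling is multi-dimensional (vector) bin packing: each VM demands a vector of resources (e.g. CPU, memory, network) and each host has a capacity vector in the same dimensions. A minimal first-fit sketch, with illustrative names not taken from the abstract above:

```python
# Hedged sketch: first-fit heuristic for multi-dimensional (vector) bin
# packing, the NP-hard core of multi-resource VM placement. All names and
# dimensions are illustrative assumptions, not the paper's actual model.

def first_fit_place(vms, hosts):
    """Place each VM (a demand tuple, e.g. (cpu, mem, net)) on the first
    host whose remaining capacity covers the demand in every dimension.

    Returns a list mapping VM index -> host index, or None if some VM
    cannot be placed on any host.
    """
    used = [tuple(0 for _ in h) for h in hosts]
    placement = []
    for vm in vms:
        for hi, cap in enumerate(hosts):
            # A host fits only if usage + demand stays within capacity
            # in *all* dimensions simultaneously.
            if all(u + d <= c for u, d, c in zip(used[hi], vm, cap)):
                used[hi] = tuple(u + d for u, d in zip(used[hi], vm))
                placement.append(hi)
                break
        else:
            return None  # no host can accommodate this VM
    return placement
```

Sorting VMs by decreasing total demand before placement (first-fit decreasing) typically tightens the packing; more sophisticated strategies, as hinted above, consider the interplay between dimensions rather than treating them independently.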
The Argo project is a DOE initiative for designing a modular operating system/runtime for the next generation of supercomputers. A key focus area in this project is power management, which is one of the main challenges on the path to exascale. In this paper, we discuss ideas for systemwide power management in the Argo project. We present a hierarchical and scalable approach to maintain a power bound...
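A hierarchical power bound of the kind described here can be pictured as a parent domain splitting its budget among child domains. As a minimal sketch (a proportional-scaling policy assumed for illustration; the Argo design is not specified in this abstract):

```python
# Hedged sketch: one level of a hierarchical power-capping scheme.
# The proportional-scaling policy is an illustrative assumption, not
# the Argo project's actual algorithm.

def distribute_power(bound, demands):
    """Split a system-wide power bound (watts) among child domains.

    Each child receives its full demand if the total fits under the
    bound; otherwise all shares are scaled proportionally so the sum
    never exceeds the bound.
    """
    total = sum(demands)
    if total <= bound:
        return list(demands)
    scale = bound / total
    return [d * scale for d in demands]
```

Applied recursively, each child treats its allocation as the bound for its own subtree, which keeps the scheme scalable: no node needs global knowledge beyond its children's demands.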
In the last decade, OpenCL has sparked the interest of the computing world as it is a language based on an open standard that can run on many different heterogeneous platforms. This standard is continuously evolving to adapt to various use cases of different platforms. For example, with requests from the FPGA community, the pipe construct was added to the standard to facilitate the implementation...
We review strategies for applying statistical inference to system design and management. In design, inferred models act as surrogates for expensive simulators and enable qualitatively new studies. In management, inferred models predict outcomes from allocation and scheduling decisions, and identify conditions that make performance stragglers more likely.
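The surrogate idea above, in its simplest form: fit a cheap statistical model to a handful of samples from the expensive simulator, then query the model instead of the simulator. A minimal one-dimensional least-squares sketch (purely illustrative; the reviewed strategies use far richer models):

```python
# Hedged sketch: a linear least-squares surrogate standing in for an
# expensive simulator. One input dimension for clarity; real surrogates
# are multivariate and often non-linear.

def fit_linear_surrogate(xs, ys):
    """Fit y ~ a*x + b to (x, y) samples drawn from the simulator and
    return a cheap predictor for new design points."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return lambda x: a * x + b
```

Once fitted, the surrogate makes design sweeps that would be infeasible against the simulator itself, which is exactly the "qualitatively new studies" point made above.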
Scientific computing on grid infrastructures has historically focused on processing vast workloads of independent single-core CPU jobs. Limitations of this approach, however, have motivated a shift towards parallel computing using message passing, multi-core CPUs and computational accelerators, including GPGPUs in particular. Application support for the use of GPGPUs in existing grid infrastructures...
Integrating FPGAs into clouds or data centers allows easy access to such reconfigurable resources and provides a promising opportunity to improve both the performance and the energy efficiency of such systems. Although the use of FPGAs as hardware accelerators, especially in clouds, is currently mainly a research topic, the integration of reconfigurable virtualized resources will become a task of growing...
This paper introduces a strategy to accelerate neighbor searching in agent-based simulations on GPU platforms. Because of their autonomous nature, agents can be processed concurrently by GPU threads, and the overall simulation can consequently be accelerated. Each agent simultaneously carries out a sense-think-act cycle in every time step. Neighbor searching is a crucial part of the sensing...
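A common way to speed up neighbor searching is uniform spatial binning: hash agents into grid cells so each query scans only the 3x3 block of cells around an agent instead of the whole population. A sequential sketch of that idea (the paper's actual GPU strategy is not specified here; cell size is assumed to be at least the query radius):

```python
# Hedged sketch: uniform-grid neighbor search, shown sequentially for
# clarity. On a GPU, one thread per agent would run the same query.
from collections import defaultdict

def build_grid(agents, cell):
    """Bin each 2D agent position into a uniform grid cell."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(agents):
        grid[(int(x // cell), int(y // cell))].append(i)
    return grid

def neighbors(agents, grid, cell, i, radius):
    """Indices of agents within `radius` of agent i. Correct when
    radius <= cell, so only the surrounding 3x3 cells need scanning."""
    x, y = agents[i]
    cx, cy = int(x // cell), int(y // cell)
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in grid.get((cx + dx, cy + dy), []):
                if j != i and (agents[j][0] - x) ** 2 + (agents[j][1] - y) ** 2 <= radius ** 2:
                    found.append(j)
    return found
```

The payoff is that each sensing step touches a bounded neighborhood rather than all N agents, turning the naive O(N^2) all-pairs scan into roughly O(N) work for uniformly distributed agents.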
In a quest to improve system performance, embedded systems today increasingly rely on heterogeneous platforms that combine different types of processing units, such as CPUs, GPUs and FPGAs. However, better hardware capability alone does not guarantee higher performance; how functionality is allocated to the appropriate processing units strongly impacts system performance as well...
The ever-rising energy and, accordingly, cooling demands are a major hurdle for the scalability of today's supercomputers. We are challenged, on the one hand, with the need to increase computational performance to cope with the rising complexity of calculations and, on the other, with the need to keep the energy/cooling demand stable or, in the best case, even to reduce it. Recently, one widely discussed way to do this is the...
To provide a flexible model for managing heterogeneous cloud computing resources, including data resources, this paper proposes a semi-structured resource description that combines virtualization technology with the interdependencies between resources to construct a heterogeneous cloud computing resource model, while integrating the hardware and software resources of the data center, so...
Standard high-performance computing (HPC) clusters are extensively used for solving computationally intensive problems in various scientific fields, one of which is reservoir simulation. In our work, we extend conventional HPC systems to run larger reservoir simulations on a heterogeneous grid of HPC clusters. This extension is accomplished by developing a unique domain decomposition technique...
Future-generation supercomputing clusters endeavour to achieve exascale performance without compromising energy efficiency. Executing multiple applications simultaneously, without space-time sharing, in a heterogeneous multi-core environment brings out the utmost parallelism that exists within the applications. This helps attain peak performance and also paves the way for improved resource utilization...
In this work, the problem of design space exploration of soft real-time embedded systems is formulated as a stochastic simulation optimization problem. A novel multi-objective genetic algorithm is proposed to address this problem. In the proposed algorithm, design metrics such as price and size are optimized while deadline violations are minimized. Experimental results show the advantages of our approach...
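At the heart of any multi-objective genetic algorithm is the Pareto-dominance test used to rank candidate designs across metrics such as price, size, and deadline violations. A minimal sketch of that building block (objectives assumed to be minimized; this is not the paper's proposed algorithm itself):

```python
# Hedged sketch: Pareto dominance and front extraction, the selection
# core of multi-objective genetic algorithms. All objectives are
# assumed to be minimized.

def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b`: no worse in
    every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Non-dominated subset of a population's objective vectors,
    e.g. (price, deadline_violations) tuples."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

A multi-objective GA repeatedly evolves the population and keeps the Pareto front as its answer, presenting the designer with the price/size/timeliness trade-off curve rather than a single point solution.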
'Simulating the Inter-Cloud' (SimIC) is a discrete-event simulation toolkit based on the process-oriented simulation package SimJava. SimIC aims to replicate an inter-cloud facility wherein multiple clouds collaborate with each other to distribute service requests according to the desired simulation setup. The package encompasses the fundamental entities of inter-cloud meta-scheduling...
Exploiting computational resources within an organisation for more than their primary task offers great benefits: it makes better use of capital expenditure and provides a pool of computational power. This can be achieved by deploying a cycle-stealing distributed system, in which tasks execute during the idle time of computers. However, if a task has not completed when a computer returns...
Accelerators such as graphics processing units (GPUs) provide an inexpensive way of improving the performance of cluster systems. In such an arrangement, the individual nodes of the cluster are directly connected to one or more accelerator devices via PCI Express. This results in a static mapping of accelerators onto compute nodes, where each accelerator can only be accessed from exactly one compute...