Fog computing is a promising technology that enables users to perform time-sensitive IoT analytics at locations near the clients. Recent studies have shown the improved delivery performance of fog networks by comparing them with traditional cloud-based architectures. In this paper, we focus on further improving the delivery performance of a fog network. We first show that the performance of fog networks...
Recently, adoption of Flash-based devices has become increasingly common in all forms of computing devices. Flash devices have started to become more economically viable for large storage installations like datacenters, where metrics like Total Cost of Ownership (TCO) are of paramount importance. Flash devices suffer from write amplification (WA), which, if unaccounted for, can substantially increase...
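Write amplification is conventionally defined as the ratio of bytes physically written to flash (including pages relocated by garbage collection) to bytes logically written by the host; the abstract above does not give its formula, so the sketch below uses this standard definition with illustrative numbers:

```python
def write_amplification(host_bytes: int, flash_bytes: int) -> float:
    """Write amplification factor: physical flash writes / logical host writes.

    A factor of 1.0 means no amplification; garbage collection that
    relocates still-valid pages pushes the factor above 1.0.
    """
    return flash_bytes / host_bytes

# Hypothetical example: the device physically writes 1.5 GiB of flash
# pages to absorb 1.0 GiB of host writes.
wa = write_amplification(host_bytes=1 * 2**30, flash_bytes=int(1.5 * 2**30))
print(wa)  # 1.5
```

Because WA multiplies every host write, a WA of 1.5 means the drive wears out 1.5x faster and sustains only 2/3 of its raw write bandwidth for user data, which is why TCO analyses for datacenter flash track it closely.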
We study the effects of temporal clustering in computer network traffic on the performance of a node whose throughput is limited by its outgoing channel capacity. The empirical data sets are exemplified by three different HTTP servers. We consider the inter-arrival times and the service times and evaluate the average system performance by queuing-system simulation. Our results indicate that...
Software Defined Networking is one of the most promising approaches to the deployment of future network infrastructures. Most Internet service providers have to manage a large number of configurations across a growing number of network devices. SDN is a paradigm that proposes the separation of the data forwarding plane from the control plane. OpenFlow is a standard protocol used in SDN for...
This paper presents a methodology and a tool for modeling and simulating job assignment and migrations in large-scale cloud infrastructures consisting of hundreds of thousands of processing, storage and networking nodes. Each cloud node, whether a server, a disk array, or a network element, can be modeled according to a generalized single-node queuing model, with appropriate parameterization and...
Applications dealing with huge amounts of data suffer significant performance impacts when they are deployed on top of a hybrid platform (i.e., the extension of a local infrastructure with external cloud resources). More precisely, through a set of preliminary experiments we show that mechanisms which enable on-demand extensions of current Distributed File Systems (DFSes) are required. These mechanisms...
The advent of cloud computing technology has made its presence effectively felt in various application areas such as business, industry, science, administration, astronomy, high-energy physics, information services, and education. Its capability of meeting continuously changing demands in all of these fields makes it increasingly popular. In order to use cloud computing technology, the user has to make...
Given the dynamic nature of the cloud, resulting from the mapping of virtual to physical resources, changes in the usage pattern of resources, migration of virtual resources, and the dynamic nature of the applications themselves, the bottleneck resource in a given application changes over time. Promptly identifying the bottleneck of a cloud application and consequently taking corrective actions (e.g. admission...
Due to the popularity and importance of Parallel File Systems (PFSs) in modern High Performance Computing (HPC) centers, PFS designs and I/O optimizations are active research topics. However, the research process is often time-consuming and faces cost and complexity challenges in deploying experiments in real HPC systems. This paper describes PFSsim, a trace-driven simulator of distributed storage...
In this paper, we present an analytical framework for characterizing and optimizing the power-performance tradeoff in Software-as-a-Service (SaaS) cloud platforms. Our objectives are two-fold: (1) We maximize the operating profit when serving heterogeneous SaaS applications with unpredictable user requests, and (2) we minimize the power consumption when processing user requests. To achieve these objectives,...
We propose techniques for power budgeting in data centers, where a large power budget is allocated among the servers and the cooling units such that the aggregate performance of the entire center is maximized. Maximizing the performance for a given power budget automatically maximizes the energy efficiency. We first propose a method to partition the total power budget among the cooling and computing...
With cloud business growing, many companies are joining the market as cloud service providers. Most providers offer similar services with slightly different pricing models, and performance data remains scarce. This leaves cloud users with the puzzle of guessing what costs they will need to pay to run their legacy applications in a cloud environment. Cloud Guide is a tool suite that provides users...
With exascale computing on the horizon, the performance variability of I/O systems represents a key challenge in sustaining high performance. In many HPC applications, I/O is concurrently performed by all processes, which leads to I/O bursts. This causes resource contention and substantial variability of I/O performance, which significantly impacts the overall application performance and, most importantly,...
In this paper, we study the performance of solid-state drives that employ flash technology as storage medium. Our prime objective is to understand how the scheduling of the user-generated read and write commands and the read, write, and erase operations induced by the garbage-collection process affect the basic performance measures throughput and latency. We demonstrate that the most straightforward...
Resource oversubscription brings the risk of resource overload. This paper proposes a mechanism to remediate overload without assuming that resources are always available for migration. A work-value notion is introduced to compare the importance of VMs, and the overload remediation problem is formulated as a variant of the Removable Online Multi-Knapsack Problem. An algorithm is proposed to solve this optimization...
Chip multi-processors (CMPs) with an increasing number of processor cores are now becoming widely available. To take advantage of many-core CMPs, applications must be parallelized. However, due to the nature of the algorithm or programming model, some parts of the application remain serial. According to Amdahl's law, the speedup of a parallel application is limited by the amount of serial execution...
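Amdahl's law, which the abstract invokes, bounds the speedup on n cores as 1 / (s + (1 - s)/n), where s is the serial fraction of the work; the serial fraction and core counts below are illustrative, not taken from the paper:

```python
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Amdahl's law: speedup on `cores` processors when a fraction
    `serial_fraction` of the execution cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Illustration: with 10% serial code, the speedup is capped at 10x
# no matter how many cores are added.
for n in (4, 16, 64, 1024):
    print(n, round(amdahl_speedup(0.10, n), 2))
```

The diminishing returns this shows (6.4x on 16 cores, under 10x on 1024) are exactly why the serial portion, not the core count, dominates scalability on many-core CMPs.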
Performance prediction has been intensively studied in the last decade, alongside the accelerated development of distributed systems. This paper focuses on a hybrid approach regarding model solving, combining two popular prediction techniques applied separately so far, analytical and simulation modeling, in order to benefit from the strengths of both. The input UML model with MARTE (Modeling and Analysis...
Studying the existence of product forms of performance models described with compositional techniques is of central importance since this may lead to particularly efficient solution methods. This paper considers a class of models in the stochastic process algebra PEPA which do not enjoy the exact product form solutions available in the literature. However, they can be interpreted as queueing networks...
In this paper, we propose simple performance models to predict the impact of consolidation on the storage I/O performance of virtualized applications. We use a measurement-based approach based on tools such as blktrace and tshark for storage workload characterization in a commercial virtualized solution, namely VMware ESX server. Our approach allows a distinct characterization of read/write performance...
In this paper, we describe implementations of PSE Park engines, focusing on batch functionality. PSE Park is a meta-PSE on the Cloud that supports the construction of PSEs for scientific and technological simulation using distributed machines. PSE Park is a framework that consists of five engines: Console, Core, PIPE Server, Manager, and x4u. The meta-PSE is realized through the cooperation of these engines...