A lack of energy proportionality, low resource utilization, and interference in virtualized infrastructure make the cloud a challenging target environment for improving energy efficiency. In this paper we present OptiBook, a system that improves energy proportionality and/or resource utilization to optimize performance and energy efficiency. OptiBook shares servers between latency-sensitive services...
Long-tail latency of web-facing applications continues to be a serious problem. Most previously published research addresses two classes of long-latency problems: uneven workloads such as web search, and resource saturation in single nodes. We describe an experimental study of a third class of long-tail latency problems that are specific to distributed systems: Cross-Tier Queue Overflow (CTQO)...
In multi-tier cloud service systems, performance evaluation relies on numerous experiments to collect key metrics such as resource usage. This approach can be highly time-consuming in practice. In this paper, we propose an automated framework for performance tracking, data management, and analysis that minimizes human intervention in multi-tier cloud service systems. The framework supports...
This paper presents how the throughput of a server is influenced by applying vertical scalability. It studies the response time and processing time of a server handling a large number of requests as the machine's configuration is modified, increasing the number of cores and the RAM capacity. This...
The orientation towards web service technology gives rise to a huge number of functionally similar web services. Hence, quality of service is becoming a more important, even essential, distinguishing criterion in web service technology. Providers generally have sufficient information about the quality actually delivered; clients, however, do not. Service level agreements that define the relationship...
AppScale provides an easy way to distribute applications built with the Google App Engine SDK across different infrastructure platforms, e.g. in a private cloud. In this paper, we provide a performance evaluation comparing a benchmark application hosted on the original Google App Engine (GAE) and, by means of AppScale, on Amazon EC2, on Google Compute Engine (GCE), and on a private on-premise cluster. The benchmark...
We address the problem of analyzing the mean delay experienced by end-users in a Content Distribution Network (CDN) consisting of a set of surrogate (cache) servers connected via persistent TCP connections over the Internet cloud to a set of remotely located origin servers. We first consider the simplest scenario of a single cache server and a single origin server. Taking into account several factors...
TCP and UDP are two important protocols in the network transport layer. In this paper, we study the effects of TCP and UDP on application performance. Using the OPNET simulation software, a simple FTP application simulation model was created and experiments were carried out. In order to test the influence of different transport protocols on application performance in different...
This paper focuses on the design and analysis of scheduling policies for multi-class queues, such as those found in wireless networks and high-speed switches. In this context, we study the response-time tail under generalized max-weight policies in settings where the traffic flows are highly asymmetric. Specifically, we consider a setting where a bursty flow, modeled using heavy-tailed statistics,...
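Generalized max-weight policies of the kind studied here serve, in each slot, the flow with the largest weighted backlog. A minimal sketch of the idea (the function name, weights, and arrival parameters are illustrative assumptions, not taken from the paper), with one bursty heavy-tailed flow and one light-tailed flow:

```python
import random

def max_weight_pick(queues, weights, alpha=1.0):
    """Pick the queue index with the largest weighted backlog.
    Generalized max-weight: serve argmax_i  w_i * Q_i**alpha."""
    return max(range(len(queues)),
               key=lambda i: weights[i] * len(queues[i]) ** alpha)

# Toy discrete-time simulation: flow 0 is bursty with Pareto-sized
# (heavy-tailed) batch arrivals, flow 1 has steady Bernoulli arrivals;
# the server transmits one packet per slot.
random.seed(0)
queues = [[], []]
weights = [1.0, 1.0]
for t in range(10_000):
    if random.random() < 0.05:                 # rare burst on flow 0
        queues[0].extend([t] * int(random.paretovariate(1.5)))
    if random.random() < 0.4:                  # steady arrivals on flow 1
        queues[1].append(t)
    i = max_weight_pick(queues, weights)
    if queues[i]:
        queues[i].pop(0)                       # serve the head-of-line packet
```

Raising `alpha` or the weight of the light flow changes how strongly the policy shields the light-tailed flow from the bursty one, which is exactly the asymmetric regime the paper analyzes.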
This paper surveys the literature to reveal the communication patterns associated with cloud computing. The literature includes a number of studies that, although not specific to cloud computing, do provide insight into one class of communication for cloud computing: the communication between cloud clients (CCs) and cloud service provider (CSP) facilities. In addition, a few studies focus on the...
Public cloud infrastructures provide flexible hosting for web application providers, but the rented virtual machines (VMs) often offer unpredictable performance to the deployed applications. Understanding cloud performance is challenging for application providers, as clouds provide limited information that would help them have expectations about their application performance. In this paper we present...
Quality of service is the key indicator for service-oriented architectures, because it directly expresses the operability and computational nature of the system. We therefore propose a quality evaluation framework for multi-service, multi-functional hierarchical SOAP-based web services. The overall interoperable quality is evaluated through load testing using Mercury Load Runner with the Apache Tomcat web...
Cloud computing is a major area of research, and cost and load balancing have become important QoS parameters. Load balancing directly affects the reliability, response time, throughput, and energy efficiency of a server. A well load-balanced architecture implies minimized overall time, fewer server failures, minimized response time, increased throughput, and less wasted energy. Such an architecture also...
In spite of the fact that Cloud Computing Environments (CCEs) host many I/O-intensive applications such as Web services, big data, and virtual desktops, virtual machine monitors like Xen impose high overhead on the performance CCEs deliver for such applications. Studies have shown that hypervisors such as Xen favor compute-intensive workloads, while their performance for I/O-intensive tasks is far...
The objective of maintaining a high efficiency for a shared storage system often has to be compromised with the enforcement of Service-level Agreement (SLA) on quality of service (QoS). From the perspective of I/O scheduling, I/O request service order optimized for disk efficiency can be substantially different from the order required for meeting QoS requirements. When QoS takes priority, the storage...
The era of cloud-based multimedia applications has led to a huge increase in the number of requests on the cloud. This increased request load makes workload balancing an important QoS parameter. Workload balancing also leads to judicious use of resources such as electricity, and thus promotes the concept of Green IT. The paper presents a new Load Balanced Resource...
Cognitive networks are rapidly proliferating into all aspects of computing and communication. Some are specially designed for people with specific abilities; however, very few are designed to assist people with disabilities and the elderly who need help during networking. The goal of this project is to study network accessibility issues and their impact on such cognitive network design...
Multi-tiered transactional web applications are frequently used in enterprise based systems. Due to their inherent distributed nature, pre-deployment testing for high-availability and varying concurrency are important for post-deployment performance. Accurate performance modeling of such applications can help estimate values for future deployment variations as well as validate experimental results...
At the onset of widespread usage of social networking services in the Web 2.0/3.0 era, leveraging a distributed, scalable caching layer like Memcached is often invaluable to application server performance. Since a majority of existing clusters today are equipped with modern high-speed interconnects such as InfiniBand, which offer high-bandwidth, low-latency communication, there is potential...
This paper presents a methodology and a tool for modeling and simulating job assignment and migrations in large scale cloud infrastructures consisting of hundreds of thousands of processing, storage and networking nodes. Each cloud node, whether a server, or a disk array or a network element can be modeled according to a generalized single node queuing model, with appropriate parameterization and...
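As a concrete instance of a generalized single-node queuing model, an M/M/1 parameterization gives closed-form steady-state metrics for one cloud node; the function name and the example load figures below are illustrative assumptions, not values from the paper:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Analytic steady-state metrics for an M/M/1 queue, the simplest
    parameterization of a single-node queuing model."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable node: utilization must be < 1")
    rho = arrival_rate / service_rate      # utilization
    L = rho / (1 - rho)                    # mean number of jobs in system
    W = L / arrival_rate                   # mean sojourn time (Little's law)
    return {"utilization": rho, "mean_in_system": L, "mean_sojourn": W}

# e.g. a node serving 80 req/s under a 60 req/s offered load:
# utilization 0.75, mean of 3 jobs in system, 0.05 s mean sojourn time
metrics = mm1_metrics(60, 80)
```

A per-node formula like this can be evaluated for hundreds of thousands of nodes far faster than discrete-event simulation, which is why such generalized single-node models scale to infrastructure-sized studies.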