This paper presents an evaluation of an experiment conducted on a server to analyse how horizontal scalability affects its performance. The paper studies results obtained by measuring response times and processing times under a large number of requests as more machines are added to the system. It also presents the technologies used to build this system of machines as well as the results obtained...
This paper presents how the throughput of a server is influenced by applying vertical scalability. The paper studies results obtained by measuring the server's response time and processing time when dealing with a large number of requests, modifying the machine's configuration by increasing the number of cores and the RAM capacity. This...
AppScale provides an easy way to distribute applications using the Google App Engine SDK on different infrastructure platforms, e.g. in a private cloud. In this paper, we provide a performance evaluation comparing a benchmark application hosted at the original Google App Engine (GAE) and by means of AppScale at Amazon EC2, at Google Compute Engine (GCE) and on a private on-premise cluster. The benchmark...
Load Balancers (LBs) play a critical role in managing the performance and resource utilization of distributed systems. However, developing efficient LBs for large, distributed clusters is challenging for several reasons: (i) large clusters require numerous scheduling decisions per second, (ii) such clusters typically consist of heterogeneous servers that widely differ in their computing power, and...
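One common answer to point (ii) above is capacity-aware dispatch, where each request is routed with probability proportional to a server's computing power. The sketch below is illustrative only, not the scheme proposed in the abstract; the server names and weights are hypothetical.

```python
import random

# Hypothetical heterogeneous cluster: weights reflect relative
# computing power (e.g. node-a is 4x as powerful as node-c).
SERVERS = {"node-a": 4.0, "node-b": 2.0, "node-c": 1.0}

def pick_server(servers):
    """Choose a server with probability proportional to its capacity."""
    names = list(servers)
    weights = [servers[n] for n in names]
    return random.choices(names, weights=weights, k=1)[0]

# Over many requests, load should split roughly 4:2:1.
counts = {n: 0 for n in SERVERS}
for _ in range(7000):
    counts[pick_server(SERVERS)] += 1
```

Weighted random dispatch needs no shared state between scheduling decisions, which matters when a large cluster requires numerous decisions per second.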
An interactive .NET web application is designed for use in remote monitoring of sensor response within a Local Area Network zone. Key performance metrics of the web application, such as response times, throughput, and processor and disk utilization, are measured using a standard testing tool. The impact of concurrent users' activities on the performance metrics of the web application has been observed...
This paper studies the performance of a distributed system. In particular, our work focuses on the study and comparison of job scheduling techniques in a cluster that could be part of a computational Grid. We examine two different job allocation policies, one static and one dynamic, and three job scheduling policies combined together. The performance of different scheduling schemes is compared over...
Media traffic has become the major traffic of the Internet and will keep on increasing. Numerous storage services, media applications and devices have emerged to provide Internet-enabled content storage and delivery capabilities. Content Delivery Networks (CDNs) based on cloud storage, adopting distributed storage technology in the edge servers, enable an efficient data storage and retrieval system that...
Scalability is one of the many challenges in designing and operating SaaS-type applications, which demand dynamic generation, composition, deployment, and monitoring of applications. A fundamental problem that confronts such applications is how to efficiently evaluate application performance and resource utilization before the applications are actually deployed. In this paper, S-BM, a benchmark...
Cloud service providers (CSPs) aim to provide infinitely scalable and elastic computing resources. In addition, user requirements for flexible cloud computing resources impact the overall performance of the cloud. On the other hand, virtualization adds an additional layer to the cloud stack and therefore decreases service performance compared to traditional on-premise hosting. It seems that the price...
Performance unpredictability is one of the major concerns slowing down the migration of mission-critical applications into cloud computing infrastructures. An example of non-intuitive result is the measured n-tier application performance in a virtualized environment that showed increasing workload caused a competing, co-located constant workload to decrease its response time. In this paper, we investigate...
Cloud computing is a paradigm that offers on-demand scalable resources with the “pay-per-usage” model. Price rises linearly as the resources scale. However, the main challenge for cloud customers is whether the performance is also scaling as the price for the resources. In this paper we analyze both the performance and the cost of a memory demanding web service. The experiments are based on measuring...
Cloud computing is a paradigm that offers on-demand scalable resources with the "pay-per-usage" model. Cloud service providers' prices rise linearly as the resources scale. However, the main challenge for cloud customers is: does the performance scale as the price for the rented resources in the cloud? Also, how does the performance scale for different server loads? In this paper we analyze the...
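The tension described above can be made concrete with a little arithmetic. The sketch below is illustrative only, not taken from the paper: it assumes price scales linearly with rented resources while throughput scales sub-linearly by a hypothetical per-unit efficiency factor, so the cost per served request grows as the deployment scales up.

```python
def cost_per_request(base_price, base_throughput, scale, efficiency=0.8):
    """Cost per request when price is linear in `scale` but each
    added unit of resources yields only `efficiency` of ideal speedup.
    All parameters here are assumed, illustrative values."""
    price = base_price * scale                              # linear price
    throughput = base_throughput * (1 + (scale - 1) * efficiency)
    return price / throughput

# Doubling resources doubles the price but, under the assumed 0.8
# efficiency, raises throughput only 1.8x: cost per request rises.
small = cost_per_request(1.0, 100.0, 1)
large = cost_per_request(1.0, 100.0, 2)
```

Under these assumptions, a customer pays more per request at larger scales even though the headline price "scales linearly".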
As the scale of datacenters continues to grow, it is hard to keep servers homogeneous, with the same hardware and performance characteristics. Today's datacenters commonly operate on several generations of servers from multiple vendors, and mix both high-end and low-end devices to deliver the required service quality at the lowest cost. However, the heterogeneous environment also complicates the...
There is dramatically increasing interest from both academia and industry in the trend of cloud computing. Cloud computing depends on the idea of computing on demand, which provides, supports and delivers computing services with stable and large data space. Our research concerns improving the search process in cloud storage by avoiding the bottleneck in central ontology cloud storage...
Load testing of IT applications is fraught with the challenges of time to market, quality of results, high cost of commercial tools, and accurately representing production like scenarios. It would help IT projects to be able to test with a small number of users and extrapolate to scenarios with much larger number of users. This in turn will cut down cycle times and costs and allow for a variety of...
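Extrapolating from a small load test to a larger user population is, in its simplest form, a curve-fitting exercise. The sketch below is a minimal illustration, not the method of the work above: it fits a straight line to hypothetical (users, response time) measurements and projects it forward, which is only valid while the system stays below saturation.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b (pure Python)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical measurements: (concurrent users, avg response time in ms)
users = [10, 20, 40, 80]
times = [120, 140, 180, 260]
a, b = fit_line(users, times)
predicted_500 = a * 500 + b  # extrapolate to 500 users
```

In practice the response-time curve bends sharply near saturation, so any linear extrapolation should be checked against at least one larger-scale measurement.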
A central goal of cloud computing is high resource utilization through hardware sharing; however, utilization often remains modest in practice due to the challenges in predicting consolidated application performance accurately. We present a thorough experimental study of consolidated n-tier application performance at high utilization to address this issue through reproducible measurements. Our experimental...
In modern replication storage systems, where data carries two or more copies, a primary group of disks is always up to service incoming requests, while other disks are often spun down to sleep states to save energy during slack periods. However, since new writes cannot be immediately synchronized onto all disks, system reliability is degraded. This paper develops PERAID, a new high-performance,...
Cloud computing is considered a booming trend in the world of information technology which depends on the idea of computing on demand. A cloud computing platform is a set of scalable data servers providing computing and storage services. Cloud storage is a relatively basic and widely applied service which can provide users with stable, massive data storage space. Our research concerns searching...
With the massification of high speed Internet access, recent industry consumer reports show that Web site performance is increasingly becoming a key feature in determining user satisfaction, and finally, a decisive factor in whether a user will purchase on a Web site or even return to it. Traditional Web infrastructure capacity planning has focused on maintaining high throughput and availability on...
This paper argues the need for "smart edge" devices to enhance the performance, functionality, and security of data center networks. Three examples, drawn primarily from the prior networking literature, are used to illustrate this point. The first example is the TCP in-cast problem, wherein highly concurrent TCP flows traverse a limited-buffer LAN switch, degrading system throughput. The...