In spite of their growing maturity, current web monitoring tools are unable to observe all operating conditions. For example, clients in different geographical locations might get very diverse latencies to the server; the network between client and server might be slow; or third-party servers with external page resources might underperform. Ultimately, only the clients can determine whether a site...
With the rapid development of web applications, the demand for dynamically adjusting computing resources based on load variation is increasing. However, most traditional web systems have only a limited ability to respond to load changes. To address this problem, software self-adaptation techniques have been applied to the resource management of web systems. Many researchers have tried to...
A lack of energy proportionality, low resource utilization, and interference in virtualized infrastructure make the cloud a challenging target environment for improving energy efficiency. In this paper we present OptiBook, a system that improves energy proportionality and/or resource utilization to optimize performance and energy efficiency. OptiBook shares servers between latency-sensitive services...
Modern distributed systems are often treated as black boxes, which greatly limits our ability to understand their behavior at the level of detail necessary to diagnose some of the most important types of performance problems. Recently, researchers have found abnormal response-time delays, one to two orders of magnitude longer than the average response time, that occur over short periods and cause economic...
The performance of n-tier web-facing applications often suffers from the response-time long-tail problem: with relatively low resource utilization (less than 50%) and the majority of requests returning within a few milliseconds, a non-negligible number of normally short requests may take seconds to return. We propose the millibottleneck theory of performance bugs (which lead to long-tail problems). Several...
Infrastructure-as-a-Service environments are becoming increasingly popular. After a failure, many applications require service restoration within a few seconds, yet reaction to failures in the cloud is still slow for many applications. Monitoring is limited to instance metrics that are not conducive to precise diagnosis, owing to the complexity of virtualization on physical hosts. Interference among different...
Early detection of exceptional behavior, coupled with comprehensive analysis of related data, can significantly reduce performance bottlenecks and outages in any system. Collating the relevant data points and establishing correlations between them to provide an abstract view of potential hotspots has been a challenge in large multi-tier systems. This paper describes a framework that enables...
This paper presents an Elastic Cloud Resource Allocation scheme that allocates the minimal cloud VM resources needed to satisfy a given Service Level Objective (SLO) response time for cloud-based elastic applications. More importantly, the algorithm attempts to mitigate any response-time violation that could arise during the provisioning of cloud VM instances. Our proposed scheme utilizes queueing...
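The queueing-based sizing idea can be sketched under a simple assumption that is not necessarily the paper's actual model: each VM behaves as an M/M/1 queue, load is split evenly across VMs, and the smallest fleet meeting the SLO then follows directly from the M/M/1 response-time formula R = 1/(mu - lambda/n). All names and parameters below are illustrative.

```python
import math

def min_vms(arrival_rate: float, service_rate: float, slo_rt: float) -> int:
    """Smallest number of VMs so that M/M/1 response time meets the SLO.

    arrival_rate: total request rate lambda (req/s)
    service_rate: per-VM service rate mu (req/s)
    slo_rt:      target mean response time (s)

    From R = 1/(mu - lambda/n) <= slo_rt we get n >= lambda / (mu - 1/slo_rt).
    """
    if service_rate <= 1.0 / slo_rt:
        # Even an idle VM has mean response time 1/mu > slo_rt: infeasible.
        raise ValueError("a single idle VM cannot meet the SLO")
    n = arrival_rate / (service_rate - 1.0 / slo_rt)
    return max(1, math.ceil(n))
```

For example, with lambda = 90 req/s, mu = 10 req/s per VM, and a 1 s SLO, the formula yields 10 VMs; the per-VM arrival rate is then 9 req/s and the mean response time exactly 1 s.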
An integrated monitoring system backed by a semi-structured datastore is a promising solution for monitoring SaaS systems. However, as SaaS systems grow in scale and remain in service over long periods, such monitoring systems face problems with log-analysis response times and storage consumption. Our empirical observation is that the problem primarily derives from the...
Adaptive content streaming is frequently used as an efficient and cheap way to achieve good quality for media streaming in systems with lightweight Over-the-Top architectures. The streaming system developed here performs an initial optimized server selection based on multi-criteria algorithms, followed by in-session media adaptation. The focus of this paper is on performance analysis (based on real-life...
This paper covers the importance of performance testing of web applications and analyses application bottlenecks based on hardware, software, and resource utilization. The main focus is on performance-testing the application against different parameters such as load, stress, scalability, reliability, security, and capacity. Nowadays everyone expects everything to be very fast, but at the...
Model-based performance management provides efficient support for controlling both the quality of service (QoS) and the cost of enterprise applications. From a cloud perspective, the two important quality measures are service availability and response time. However, expensive high-precision monitoring is necessary to identify key model parameters, such as the CPU utilization of particular requests, which are commonly...
Web page response time is a key factor in web user satisfaction. Being able to measure or estimate web page response time is important, as it allows identification of users facing slow-responding web pages. Remedial actions can then be taken to improve the response time perceived by these users, which helps enhance their satisfaction with the web site. In this paper, we present a server-side...
With the rapid growth of demand for big-data storage, MongoDB has become a prevalent choice for storing unstructured data in recent years. MongoDB distributes data evenly across shard servers to ensure that all shard servers hold approximately the same amount of data and that the data-access workload is balanced across them. This approach, however, can hardly guarantee the performance of data access...
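The tension this abstract points at, even data placement versus unbalanced data access, can be illustrated with a toy hashed-sharding sketch. The hashing scheme and names below are illustrative and simplified, not MongoDB's actual implementation:

```python
import hashlib
from collections import Counter

NUM_SHARDS = 4

def shard_of(key: str, num_shards: int = NUM_SHARDS) -> int:
    # Simplified hashed shard key: md5 of the document key, mod shard count.
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return h % num_shards

# Data volume spreads almost evenly: ~2500 documents land on each shard.
placement = Counter(shard_of(f"doc{i}") for i in range(10_000))

# But a skewed access pattern (a few hot documents read repeatedly) still
# concentrates the read load on whichever shards hold the hot keys.
hot_keys = [f"doc{i}" for i in range(10)]
access_load = Counter(shard_of(k) for k in hot_keys * 1_000)
```

`placement` ends up nearly uniform across the four shards, while `access_load` is dictated entirely by where the ten hot documents happen to hash, which is the gap between balanced storage and balanced workload.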
Fast detection of performance anomalies is critical for Cloud applications, but challenging to implement in a general and effective tool with low operational overhead. We propose FSAD, a performance anomaly detection system based on the concept of flow similarity. It stems from the observation that, in general, the number of responses generated by a component closely follows the number of received...
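A minimal sketch of the flow-similarity idea, assuming per-window request and response counts have already been collected for a component; the function name and the fixed tolerance threshold are illustrative, not FSAD's actual algorithm:

```python
def flow_similarity_anomaly(requests, responses, tolerance=0.2):
    """Flag time windows where a component's response count deviates from
    its request count by more than `tolerance` (relative difference).

    requests, responses: parallel lists of per-window counts.
    Returns the indices of anomalous windows.
    """
    anomalies = []
    for i, (req, resp) in enumerate(zip(requests, responses)):
        if req == 0:
            continue  # no inbound flow this window; nothing to compare
        if abs(resp - req) / req > tolerance:
            anomalies.append(i)
    return anomalies
```

For instance, with request counts [100, 100, 100] and response counts [98, 60, 101], only the middle window (a 40% shortfall in responses) is flagged.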
An interactive .NET web application is designed for remote monitoring of sensor responses within a Local Area Network zone. Key performance metrics of the web application, such as response times, throughput, and processor and disk utilization, are measured using a standard testing tool. The impact of concurrent users' activities on the performance metrics of the web application has been observed...
Guaranteeing the privacy of a person's data is understood as the ability to manage, alter, restrict, or publish it for a group of individuals chosen by that person. Shared data can be sensitive, revealing something private that deserves protection when shared, e.g., personal financial information. Among several computing services, there is much sensitive data without any privacy-preserving...
Modern scientific experiments generate vast volumes of data that are hard to keep track of. Consequently, scientists find it difficult to reuse and share these data sets. We address this problem by developing a schema-independent data cataloging framework for efficient management of scientific data. The proposed solution consists of an agent which automatically identifies new data products and extracts...
In this paper, we conduct a detailed study characterizing the performance of multi-tier web applications on commercial cloud platforms and evaluate the potential of techniques to improve the resilience of such applications to performance fluctuations in the cloud. In contrast to prior works that have studied the performance of individual cloud services or that of compute-intensive scientific applications...
We aim at enhancing the Quality of Service (QoS) management in modern Internet applications that heavily rely on the quality of the underlying network. Our goal is to provide the application developers with mechanisms for specifying and controlling high-level, application-related QoS metrics, rather than the traditional low-level, network-related metrics like latency, throughput, packet loss, etc...