A lack of energy proportionality, low resource utilization, and interference in virtualized infrastructure make the cloud a challenging target environment for improving energy efficiency. In this paper we present OptiBook, a system that improves energy proportionality and/or resource utilization to optimize performance and energy efficiency. OptiBook shares servers between latency-sensitive services...
Most large, popular web applications, such as Facebook and Twitter, rely on large amounts of in-memory storage to cache data and offer low response times. As the main memory capacity of clusters and clouds increases, it becomes possible to keep most of the data in main memory. This motivates the introduction of in-memory storage systems. While prior work has focused on how to exploit...
In wireless networks, it is important to realize energy-efficient video delivery. To do this, we introduce energy-efficient video streaming over named data networking (NDN). In our proposed approach, we focus on two areas, namely, to improve the throughput performance by Interest aggregation, and to reduce the overhead energy using playout buffer-size control. We evaluate the power savings realized...
Information Centric Networking (ICN) is a new networking paradigm in which the network provides users with named content, instead of communication channels between hosts. However, many issues, such as naming, routing, resource control, and security, still need to be resolved before it can be realized practically. Further, the energy efficiency of ICNs has not been sufficiently considered. In this...
Cloud computing is a major area of research, in which cost and load balancing have become important QoS parameters. Load balancing directly affects the reliability, response time, throughput, and energy efficiency of a server. A good load-balanced architecture implies lower overall time, fewer server failures, reduced response time, increased throughput, and less wasted energy. Such an architecture also...
The era of cloud-based multimedia applications has led to a huge increase in the number of requests on the cloud. This increased number of requests leads to an increased workload, making workload balancing an important QoS parameter. Workload balancing also leads to judicious use of resources such as electricity and thus promotes the concept of Green IT. This paper presents a new Load Balanced Resource...
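The balancing idea sketched in the abstracts above can be illustrated with a minimal dispatch policy. The least-loaded policy below is a generic illustration chosen here, not the scheme proposed in the truncated paper; the class and server names are hypothetical.

```python
# Minimal sketch of load-balanced request dispatch: each incoming request
# goes to the server with the fewest active requests. This is a generic
# least-loaded policy for illustration, not the paper's architecture.
import heapq

class LeastLoadedBalancer:
    def __init__(self, servers):
        # Min-heap of (active_request_count, server_name) pairs.
        self.heap = [(0, s) for s in servers]
        heapq.heapify(self.heap)

    def dispatch(self):
        # Pop the least-loaded server, count the new request, push it back.
        load, server = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (load + 1, server))
        return server

lb = LeastLoadedBalancer(["s1", "s2", "s3"])
print([lb.dispatch() for _ in range(6)])
# → ['s1', 's2', 's3', 's1', 's2', 's3']
```

With equal loads the heap falls back to name order, so requests spread evenly across the three servers.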
This paper presents a methodology and a tool for modeling and simulating job assignment and migrations in large-scale cloud infrastructures consisting of hundreds of thousands of processing, storage, and networking nodes. Each cloud node, whether a server, disk array, or network element, can be modeled according to a generalized single-node queuing model, with appropriate parameterization and...
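The single-node queuing model mentioned above can be sketched in its simplest form as an M/M/1 queue. The rates below are illustrative assumptions, not parameters from the paper.

```python
# Minimal sketch of a single-node queuing model: an M/M/1 queue with
# exponential interarrival and service times, simulated job by job.
# Rates are illustrative assumptions, not values from the paper.
import random

def simulate_mm1(arrival_rate, service_rate, n_jobs, seed=42):
    """Return the mean time a job spends at the node (waiting + service)."""
    rng = random.Random(seed)
    clock = 0.0          # time of the current arrival
    free_at = 0.0        # time the server next becomes free
    total_sojourn = 0.0
    for _ in range(n_jobs):
        clock += rng.expovariate(arrival_rate)   # next job arrives
        start = max(clock, free_at)              # waits if server is busy
        free_at = start + rng.expovariate(service_rate)
        total_sojourn += free_at - clock
    return total_sojourn / n_jobs

# Theory for M/M/1 predicts mean sojourn = 1 / (service_rate - arrival_rate),
# i.e. 2.0 for the rates below; the simulated value should be close.
print(simulate_mm1(arrival_rate=0.5, service_rate=1.0, n_jobs=100_000))
```

In a whole-infrastructure simulation of the kind the paper describes, each node would carry its own parameterization of such a model, with jobs routed among nodes.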
Computation is increasingly moving to the data center. Thus, the energy used by CPUs in the data center is gaining importance. The centralization of computation in the data center has also led to much commonality among the applications running there. For example, there are many instances of similar or identical versions of the Apache web server running in a large data center. Many of these applications,...
Replication is a widely used technique to provide high-availability to online services. While being an effective way to mask failures, replication comes at a price: at least twice as much hardware and energy are required to mask a single failure. In a context where the electricity drawn by data centers worldwide is increasing each year, there is a need to maximize the amount of useful work done per...
The annual electricity consumed by data transfers in the U.S. is estimated to be 20 terawatt-hours, which translates to around 4 billion U.S. dollars per year. There has been a considerable amount of prior work on power management and energy efficiency in hardware and software systems, and more recently in power-aware networking. Despite the growing body of research in power management techniques...
The research communities in research institutes and Higher Education (HE) establishments demand ever more powerful computing resources to support complex scientific and industrial simulation and modeling, and the manipulation and storage of large quantities of data [6, 9]. In this paper we present our experience at the University of Huddersfield (UoH), UK, in developing the HPC systems...
Lossless compression and decompression are routinely used in mobile computing devices to reduce the costs of communicating and storing data. This paper presents the results of an experimental evaluation of common compression utilities on Pandaboard, a development platform similar to current commercial mobile devices. We study the compression ratio, compression and decompression throughput, and energy...
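The kind of measurement the evaluation above describes can be sketched for a single codec. The snippet uses zlib purely as a stand-in; the paper evaluates several compression utilities on Pandaboard hardware, and energy measurement requires instrumentation not shown here.

```python
# Minimal sketch of measuring compression ratio and throughput for one
# codec (zlib, chosen here as an assumption; the paper compares several
# utilities). Energy measurement would need external power instrumentation.
import time
import zlib

def measure(data: bytes, level: int = 6):
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    ratio = len(data) / len(compressed)       # higher = better compression
    throughput = len(data) / elapsed / 1e6    # MB of input consumed per second
    return ratio, throughput

sample = b"the quick brown fox jumps over the lazy dog " * 1000
ratio, mbps = measure(sample)
print(f"ratio={ratio:.1f}x, throughput={mbps:.0f} MB/s")
```

Repeating such a measurement across utilities, compression levels, and input types, while logging power draw, yields the ratio/throughput/energy trade-offs the abstract refers to.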
Energy efficiency is an important issue for data centers given the amount of energy they consume yearly. However, there is still a gap in understanding how exactly the application type and the heterogeneity of servers and their configuration impact the energy efficiency of data centers. To this end, we introduce the notion of Application Specific Energy Efficiency (ASEE) in order to rank energy...
Energy efficiency has become one of the most important challenges in designing large-scale clusters built from commodity computer components. Traditionally, load balancers are employed by clusters to improve system performance and scalability. However, those balancers do not consider the energy used by the clusters. A power-aware cluster scheduler has been proposed in the community by concentrating...
Multi-tier data centers have become the norm for hosting modern Internet applications because they provide a flexible, modular, scalable, and high-performance environment. However, these benefits come at the cost of powering and cooling these large hosting centers. Thus, energy efficiency has become a critical consideration in designing Internet data centers. In this paper,...