Critical networked services enable significant revenue for network operators and, in turn, are regulated by Service Level Agreements (SLAs). In order to ensure SLAs are being met, service levels need to be monitored. One technique for this involves active measurements, such as IPSLA. However, active measurements are expensive in terms of CPU consumption on network devices. As a result, active measurements...
The electricity cost of cooling systems can account for 30% of the total electricity bill of operating a data center. While many prior studies have tried to reduce the cooling energy in data centers, they cannot effectively utilize the time-varying power prices in the power market to cut the electricity bill for data center cooling. This is in contrast to the fact that various thermal and energy storage...
In recent years, many service providers have started migrating their service offerings to cloud infrastructure. Sometimes, however, parts of the service workflow cannot be moved to cloud environments. This can occur due to client policies, or because some services are linked to physical client-site devices. The result of the migration is then a hybrid cloud environment, where part of the services...
Traffic histograms play a crucial role in various network management applications such as network traffic anomaly detection. However, traffic histogram-based analysis suffers from the curse of dimensionality. To tackle this problem, we propose a novel approach called K-sparse approximation. This approach can drastically reduce the dimensionality of a histogram, while keeping the approximation error...
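The core idea of a K-sparse approximation can be sketched simply: keep only the K largest bins of the histogram and zero out the rest, so that the dropped mass bounds the approximation error. The sketch below is a minimal illustration of that principle under assumed names and an assumed L1 error measure; the paper's exact construction may differ.

```python
def k_sparse_approx(hist, k):
    """Keep only the k largest bins of a histogram, zeroing the rest.
    The total dropped mass gives the L1 approximation error.
    (Illustrative sketch; the paper's exact construction may differ.)"""
    if k >= len(hist):
        return list(hist), 0.0
    # indices of the k largest bins (ties broken arbitrarily)
    keep = set(sorted(range(len(hist)), key=lambda i: hist[i], reverse=True)[:k])
    approx = [v if i in keep else 0 for i, v in enumerate(hist)]
    l1_error = sum(abs(a - b) for a, b in zip(hist, approx))
    return approx, l1_error

hist = [120, 3, 87, 1, 0, 45, 2, 60]
approx, err = k_sparse_approx(hist, 4)  # keeps 120, 87, 60, 45; drops mass 6
```

Only K indices and K values need to be stored per histogram, which is where the dimensionality reduction comes from when K is much smaller than the number of bins.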
As more and more data centers embrace end host virtualization and virtual machine (VM) mobility becomes commonplace, we explore its implications on data center networks. Live VM migrations are considered expensive operations because of the additional network traffic they generate, which can impact the network performance of other applications, and because of the downtime that applications...
Cloud-based backup and archival services use large tape libraries as a cost-effective cold tier in their online storage hierarchy today. These services leverage deduplication to reduce the disk storage capacity required by their customer data sets, but they usually re-duplicate the data when moving it from disk to tape.
IT service delivery becomes an increasingly challenging business as customers demand improved quality of service while providers are driven to reduce the cost of delivery. While effective service delivery requires advances in many areas, including workload management and workforce optimization, in this paper we focus on service request dispatching decision-making. Specifically, we propose an implementation...
Multi-core architectures with asymmetric core performance have recently shown great promise, because applications with different needs can benefit from either the high performance of a fast core or the high parallelism and power efficiency of a group of slow cores. This performance heterogeneity can be particularly beneficial to applications running in virtual machines (VMs) on virtualized servers,...
Networking infrastructure consumes a sizable fraction of the electricity supply. A network design model that maximizes energy savings by aggregating traffic demand onto a small set of resources, so that under-utilized resources can be put to sleep, conflicts with legacy models that maximize network throughput by spreading the load across network resources. Traffic fluctuations and sudden spikes further...
Growing concern for reduced power dissipation, cost and latency demands in next generation Data Centers (DC) motivates us to revisit header optimizations. Headers contribute about 30–40% of DC traffic and are responsible for an equal proportion of consumed power. This amounts to a significant per-byte overhead on payload transfer. In the past, highly inflexible switches have limited the focus of header...
Wireless Sensor Networks (WSNs) provide a flexible communication infrastructure for sensing and control. However, maintaining coverage is one of the most challenging tasks in configuring and deploying WSNs. Although there has been a significant amount of research on providing coverage, most of the existing solutions focus on the coverage problem without giving attention to new sensing capabilities and...
The Internet of Things (IoT) is a promising theme of research, covering subjects from micro-electronics to the social sciences, with major fields in computing, networking and telecommunications. It is regarded as the future of today's Internet. The main idea is to benefit from an ambient intelligence instantiated by objects assisting humans in their daily tasks. Researchers have already imagined use cases and challenging...
Mobile voice-assisted services are currently experiencing strong growth. However, occasionally low real-time quality of service within mobile networks can have a significant negative impact on the quality of experience of users interacting with automated voice services. Latency may grow to unacceptable levels, and speech recognition and synthesis might suffer. We present a methodology for mitigating such...
Network virtualization enables the creation of multiple instances of virtual networks on top of a single physical infrastructure. Given its wide applicability, this technique has attracted a lot of interest both from academic researchers and major companies within the segment of computer networks. Although recent efforts (motivated mainly by the search for mechanisms to evaluate Future Internet proposals)...
HTTP Adaptive Streaming (HAS) is becoming the de facto standard for adaptive streaming solutions. In HAS, video content is split into segments and encoded into multiple qualities, such that the quality of a video can be dynamically adapted during the HTTP download process. This has given rise to intelligent video players that strive to maximize Quality of Experience (QoE) by adapting the displayed...
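A minimal sketch of the kind of adaptation logic such players use: choose the highest encoded bitrate that fits under a safety fraction of the measured throughput. The function name, encoding ladder, and safety factor below are illustrative assumptions, not the actual algorithm of any specific player.

```python
def select_quality(bitrates_kbps, measured_throughput_kbps, safety=0.8):
    """Pick the highest available bitrate below a safety fraction of the
    measured throughput -- a simple rate-based heuristic; real HAS players
    use considerably more sophisticated adaptation (buffer state, trends)."""
    budget = measured_throughput_kbps * safety
    feasible = [b for b in bitrates_kbps if b <= budget]
    # fall back to the lowest quality if nothing fits the budget
    return max(feasible) if feasible else min(bitrates_kbps)

ladder = [300, 750, 1500, 3000, 6000]   # hypothetical encoding ladder (kbps)
q = select_quality(ladder, 2500)        # budget 2000 kbps -> picks 1500
```

The player would re-run a decision like this before each segment download, which is what makes the per-segment encoding of HAS content useful.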
Operational services in MANETs, such as resource location and distribution of connectivity information, must deal with node mobility and resource constraints to support applications. The reliability and availability of these services can be assured by data management approaches, such as replication techniques using quorum systems. However, these systems are vulnerable to selfish and malicious nodes that intentionally...
Driving productivity transformations in services organizations must be a data-driven exercise. We develop a methodology that exploits effort-data analysis in order to drive productivity optimization initiatives with solid business cases and action plans. A services factory model is applied to a service provider organization to monitor where and how time is spent by the service staff. The results are...
In Cognitive Radio Ad Hoc Networks (CRAHNs), malicious Secondary Users can exploit CR (Cognitive Radio) capabilities to perform Primary User Emulation Attacks (PUEA). These attacks mimic the transmissions of a Primary User (PU), giving malicious users priority in using licensed frequencies over well-behaved unlicensed Secondary Users (SU). Since CRAHNs are envisioned as a solution for the...
The cloud paradigm facilitates cost-efficient elastic computing, allowing workloads to be scaled on demand. As cloud size increases, the probability that all workloads simultaneously scale up to their maximum demand diminishes. This observation allows multiplexing cloud resources among multiple workloads, greatly improving resource utilization. The ability to host virtualized workloads such that available...
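The multiplexing observation can be made concrete with a toy model: if each workload independently hits its peak with probability p, then all n workloads peak at once with probability p^n, which vanishes rapidly as the cloud grows. Independence is a strong simplifying assumption made here only for illustration; real workloads are often correlated.

```python
def prob_all_peak(p, n):
    """Probability that all n workloads hit maximum demand simultaneously,
    assuming each peaks independently with probability p (toy model only)."""
    return p ** n

# Simultaneous peak demand becomes vanishingly rare as the cloud grows:
for n in (1, 10, 100):
    print(f"n={n}: {prob_all_peak(0.2, n):.3g}")
```

This is why a provider can safely provision less than the sum of all maximum demands, which is the statistical-multiplexing gain the abstract refers to.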
This work presents models characterizing failures observed during the execution of large scientific applications on Amazon EC2. Scientific workflows are used as the underlying abstraction for application representations. As scientific workflows scale to hundreds of thousands of distinct tasks, failures due to software and hardware faults become increasingly common. We study job failure models for...