Network Function Virtualization (NFV) is an emergent paradigm that is currently transforming the way network services are provisioned and managed. The main idea of NFV is to decouple network functions from the hardware running them. This makes it possible to reduce deployment costs and to improve the flexibility and scalability of network services. Despite these benefits, a major challenge cloud providers...
Despite the known benefits of hosting cloud-based services, the longer and often unpredictable end-to-end network latencies between the end user and the cloud can be detrimental to the response-time requirements of interactive cloud-hosted applications. Existing efforts that exploit edge/fog technology to migrate services closer to clients in order to improve response times do not fully resolve...
Cloud service providers are trying to reduce their operating costs while offering their services at higher quality by resorting to the concept of elasticity. However, the vast majority of related work focuses solely on guaranteeing the quality of service (QoS) of interactive applications such as Web services, whereas a broad range of applications have different QoS constraints that do not...
Systems for processing large scale analytical workloads are increasingly moving from on-premise setups to on-demand configurations deployed on scalable cloud infrastructures. To reduce the cost of such infrastructures, existing research focuses on developing novel methods for workload and server consolidation. In this paper, we combine analytical modeling and non-linear optimization to help cloud...
To execute cloud computing tasks over a data center hosting hundreds of thousands of server nodes, it is natural to distribute computations across the nodes to take advantage of parallel processing. However, as we allocate more computing resources and further distribute the computations, a large amount of intermediate data must be moved between consecutive computation stages among the nodes, causing...
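The data-movement cost described in this abstract can be made concrete with a back-of-the-envelope model: in an all-to-all shuffle where each of n nodes holds 1/n of the intermediate data and keys are spread uniformly, a byte stays on its local node only with probability 1/n. This simple model is illustrative and not taken from the paper:

```python
def shuffle_traffic(total_bytes, n_nodes):
    """Bytes crossing the network in a uniform all-to-all shuffle.

    Each node holds 1/n of the intermediate data; under uniformly
    distributed keys, a byte remains local with probability 1/n,
    so the fraction (n - 1)/n must traverse the network.
    """
    return total_bytes * (n_nodes - 1) / n_nodes
```

Note how the network fraction approaches 100% as the computation is spread over more nodes, which is exactly the tension the abstract points at.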
We propose a method of constructing a layer 3 (L3) network consisting of a large number of virtual machines (VMs) for large-scale IoT emulation testbeds by utilizing the Hierarchical/Automatic Number Allocation Protocol (HANA). The L3 network consists of several subnets and suppresses the number of MAC addresses that must be distinguished from each other. HANA relieves the burden on an L2 switch,...
Network Function Virtualization (NFV) is a new paradigm, enabling service innovation through virtualization of traditional network functions, which are located flexibly in the network in the form of Virtual Network Functions (VNFs). Since VNFs can only be placed onto servers located in networked data centers, which is NFV's salient feature, the traffic directed to these data center areas has a significant impact...
This paper considers centralized coded caching, where the server not only designs the users' cache contents, but also assigns their cache sizes under a total cache memory budget. The server is connected to each user via a link of given finite capacity. For given link capacities and total memory budget, we minimize the worst-case delivery completion time by jointly optimizing the cache sizes, the cache...
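For reference, the uniform-cache centralized scheme that heterogeneous-cache formulations like the one above generalize is the Maddah-Ali–Niesen scheme, whose worst-case delivery load has a closed form. A minimal sketch — the function name and the integer-t restriction are simplifying assumptions, not the paper's formulation:

```python
from fractions import Fraction

def mn_delivery_rate(K, N, M):
    """Worst-case delivery load (in file units) of the centralized
    Maddah-Ali--Niesen coded caching scheme with K users, N files,
    and a uniform per-user cache of M files, assuming t = K*M/N is
    an integer and K <= N.
    """
    t = Fraction(K) * Fraction(M) / N
    assert t.denominator == 1 and 0 <= t <= K, "t = K*M/N must be an integer"
    t = int(t)
    # R(t) = K*(1 - M/N) / (1 + t) = (K - t) / (1 + t):
    # K - t uncached file fractions per user, coding gain 1 + t.
    return Fraction(K - t, 1 + t)
```

For example, with K = 2 users, N = 2 files, and M = 1, the scheme delivers the worst case in half a file's worth of transmission, versus a full file uncoded.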
Cloud computing is a promising framework providing a variety of solutions, ranging from software services to infrastructure services, through the mechanism of customizable virtual instances. The cloud manager is responsible for resource provisioning for these instances to provide guaranteed performance while at the same time avoiding underutilization of the platform. In this paper, we introduce a novel...
Resource management of modern datacenters needs to consider multiple competing objectives that involve complex system interactions. In this work, Linear Temporal Logic (LTL) is adopted to describe such interactions by leveraging its ability to express complex properties. Further, LTL-based constraints are integrated with reinforcement learning according to recent progress in control synthesis theory...
The real-time deferrable server (RTDS) scheduler has been available since Xen 4.5. Under RTDS, a guaranteed physical CPU capacity is provided to every virtual CPU so that performance can be better predicted. However, because the guaranteed capacity is defined offline, it might not fit the requirements of a virtual CPU at run time. In this paper, an RTDS-based CPU scheduler is proposed, called enhanced real-time...
Efficient scheduling of both computing and storage resources is fundamental to cloud computing and important for the effectiveness of applications in data centers. In this paper, we jointly consider the scheduling of both computing and storage resources in data centers. To solve this coupled placement problem, we apply and extend three-sided stable matching theory to model the problem as a three-sided...
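For background, the two-sided stable matching that the paper's three-sided model extends is computed by the classical Gale–Shapley deferred-acceptance algorithm. A minimal sketch — the task/server naming is illustrative, and this is the textbook two-sided version, not the paper's three-sided extension:

```python
def gale_shapley(proposer_prefs, acceptor_prefs):
    """Deferred-acceptance stable matching.

    proposer_prefs: dict proposer -> ordered list of acceptors
    acceptor_prefs: dict acceptor -> ordered list of proposers
    Returns dict acceptor -> matched proposer.
    """
    # Precompute each acceptor's ranking of proposers (lower = preferred).
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    free = list(proposer_prefs)          # proposers still unmatched
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                           # acceptor -> proposer
    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if a not in match:
            match[a] = p                 # acceptor tentatively accepts
        elif rank[a][p] < rank[a][match[a]]:
            free.append(match[a])        # acceptor trades up
            match[a] = p
        else:
            free.append(p)               # rejected; propose to next choice
    return match
```

The resulting matching is stable: no task/server pair would both prefer each other over their assigned partners, which is the property the three-sided extension must preserve across a third dimension (e.g. storage).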
In virtualized datacenters (vDCs), dynamic consolidation of virtual machines (VMs) is one of the most common techniques to achieve both energy- and resource-utilization efficiency. Live migrations of VMs are used for dynamic consolidation, but dynamic variation of the VMs' resource demands may lead to frequent and non-optimal migrations. Assuming deterministic workload of the VMs may ensure...
This paper presents a method of cloud resource allocation designed to take into account both consumers' and providers' interests. This comes in contrast to today's provider-centered models that subject users to more restrictive terms and conditions. Both parties' interests are expressed in the form of integer constraints. Costs and availability are embedded as key objectives and performance criteria...
Resource allocation strategy has been a hot and difficult research topic in the field of cloud computing. We address the problem of fair resource allocation in heterogeneous cloud computing where multiple types of resources are considered, which is computationally intractable. There is a significant gap between the solutions obtained by existing heuristic algorithms and the optimal solutions,...
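A widely used baseline for multi-resource fair allocation of the kind discussed above is Dominant Resource Fairness (DRF), which progressively grants a task to whichever user currently has the smallest dominant share. A minimal sketch — the progressive-filling loop, the stopping rule, and the example demands are simplifications for illustration, not the paper's algorithm:

```python
def drf(capacity, demands, max_rounds=1000):
    """Progressive-filling Dominant Resource Fairness.

    capacity: dict resource -> total amount available
    demands:  dict user -> dict resource -> per-task demand
    Returns the number of tasks allocated to each user.
    """
    used = {r: 0.0 for r in capacity}
    tasks = {u: 0 for u in demands}

    def dominant_share(u):
        # A user's dominant share is the largest fraction of any
        # single resource that their allocation occupies.
        return max(tasks[u] * d / capacity[r]
                   for r, d in demands[u].items())

    for _ in range(max_rounds):
        u = min(demands, key=dominant_share)   # poorest user proposes
        d = demands[u]
        if any(used[r] + d[r] > capacity[r] for r in d):
            break                              # next task no longer fits
        for r in d:
            used[r] += d[r]
        tasks[u] += 1
    return tasks
```

On the classic two-user example (9 CPUs and 18 GB, user A demanding <1 CPU, 4 GB> per task and user B <3 CPUs, 1 GB>), this yields 3 tasks for A and 2 for B, equalizing the two dominant shares at 2/3.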
Web Real-Time Communication, or real-time communication in the Web (WebRTC/RTCWeb), is a prolific new standard and technology stack providing full audio/video-agnostic communications for the Web. Service providers implementing such technology deal with various levels of complexity, ranging from high service distribution and multi-client integration to P2P- and cloud-assisted communication backends,...
The steadily increasing success of cloud computing is causing a huge rise in its electrical power consumption, contributing to higher energy costs as well as to the greenhouse effect and global warming. One of the most common key strategies to reduce the power consumption of data centers is the consolidation of virtual machines, whose effectiveness strongly depends on reliable forecasting of...
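Forecasting in consolidation controllers of the kind mentioned above is often bootstrapped with a simple exponentially weighted moving average (EWMA) of observed utilization; a minimal sketch, where the smoothing factor and the seeding choice are illustrative assumptions rather than anything from the paper:

```python
def ewma_forecast(samples, alpha=0.3):
    """One-step-ahead EWMA forecast of resource utilization.

    samples: observed utilizations (e.g. CPU %) in time order
    alpha:   smoothing factor in (0, 1]; higher reacts faster
             to recent load, lower smooths out bursts
    Returns the forecast for the next interval.
    """
    it = iter(samples)
    forecast = next(it)                  # seed with first observation
    for x in it:
        forecast = alpha * x + (1 - alpha) * forecast
    return forecast
```

A consolidation manager would compare such per-VM forecasts against host capacity before deciding whether a live migration is worthwhile, rather than reacting to instantaneous spikes.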
Virtualization of computing and communication infrastructures has been disseminated as a possible solution for network evolution and the deployment of new services on cloud data centers. Although promising, its effective application faces obstacles, mainly caused by rigidity in the management of communication resources. Currently, the Software-Defined Networks (SDN) paradigm has been popularizing customization and...
Customers often suffer from the variability of data access time in cloud storage service, caused by network congestion, load dynamics, etc. One solution to guarantee a reliable latency-sensitive service is to issue requests with multiple download/upload sessions, accessing the required data (replicas) stored in one or more servers. In order to minimize storage costs, how to optimally allocate data...
Next-generation networks are expected to support low-latency, context-aware and user-specific services in a highly flexible and efficient manner. Proposed applications include high-definition, low-latency video streaming, remote surgery, as well as applications for tactile Internet, virtual or augmented reality that demand network side data processing (such as image recognition, transformation or...