Statistics show low resource utilization and high energy consumption in traditional servers. To reduce costs, more and more companies are building virtual servers. Server virtualization implements the mapping from virtual resources to physical resources and deals with resource contention among all VMs. Because of the complexity of virtualized server systems, it is necessary to...
A polynomial fitting model for predicting the RTP packet rate of Video-on-Demand received by a client is presented. This approach is underpinned by a parametric statistical model for the client-server system. This model, namely the PQ-model, improves the robustness of the predictor in the presence of a time-varying load on the server. The advantage of our approach is that if we model the load on the...
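The PQ-model itself is not specified in the abstract; a minimal sketch of the plain polynomial-fitting baseline it builds on, using NumPy, might look like the following (the function name, synthetic data, and one-step-ahead usage are illustrative assumptions, not the paper's method):

```python
import numpy as np

# Illustrative sketch: fit a polynomial to observed RTP packet-rate
# samples and extrapolate one step ahead. The actual PQ-model is a
# parametric, load-aware statistical model; this shows only the plain
# polynomial-fitting baseline it improves on.
def fit_rate_predictor(times, rates, degree=2):
    """Return a callable predicting packet rate (packets/s) at time t."""
    coeffs = np.polyfit(times, rates, degree)  # least-squares fit
    return np.poly1d(coeffs)

times = np.arange(10.0)                      # seconds
rates = 100 + 5 * times + np.sin(times)      # synthetic rate samples
predict = fit_rate_predictor(times, rates)
next_rate = predict(10.0)                    # one-step-ahead prediction
```

Under a time-varying server load, such a fixed fit drifts; the abstract's point is that modeling the load explicitly makes the predictor robust to that drift.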
Network emulation strikes the balance between using real machines on full-fledged networks and running software models of applications and networks in simulation environments. Advanced Linux features make it possible to emulate entire networks on a single machine, enabling experiments that are much easier to run and repeat. However, some of these features were not designed with the primary purpose...
A modern virtualized data center is a highly multifarious environment shared among hundreds of co-located tenants hosting heterogeneous applications. The tenants' virtual machines generate a mix of elephant and mouse flows (differing in rate, size, duration, and burstiness) depending on the type of application they are running. Virtual traffic generated by the tenants' virtual machines traverses...
With the computational power available today, machine learning is becoming a very active field, finding applications in our everyday life. One of its biggest challenges is the classification task involving data representation (the preprocessing part of a machine learning algorithm). In fact, classification of linearly separable data can be done easily. The aim of the preprocessing part is to obtain...
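The abstract's premise — that classification is easy once data is linearly separable, so preprocessing aims at a separable representation — can be illustrated with the classic XOR example. The feature map below is an assumed textbook illustration, not this paper's technique:

```python
# XOR labels are not linearly separable in the raw features (x1, x2),
# but adding the product feature x1*x2 makes them separable, so a
# single fixed linear threshold suffices after preprocessing.

def lift(x1, x2):
    """Preprocessing step: map raw features into a separable space."""
    return (x1, x2, x1 * x2)

def linear_classify(features, weights=(0.0, 0.0, -1.0), bias=0.0):
    """Plain linear classifier: thresholded w.x + b."""
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return 1 if score > 0 else 0

# XOR truth table encoded with inputs in {-1, +1}
data = [((-1, -1), 0), ((-1, 1), 1), ((1, -1), 1), ((1, 1), 0)]
predictions = [linear_classify(lift(*x)) for x, _ in data]
# predictions match the XOR labels: [0, 1, 1, 0]
```

No linear classifier on (x1, x2) alone can reproduce XOR; after the lift, a hand-picked hyperplane does, which is exactly the separability that good preprocessing buys.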
With the huge growth in mobile traffic, conventional Radio Access Networks (RANs) suffer from high capital and operating expenditures, especially when new cellular standards are deployed. Software-defined and cloud RANs have been proposed, but the stringent latency requirements dictated by cellular networks, e.g., a 1 ms transmission time interval, are difficult to satisfy. We first present a real software...
As cloud becomes a cost effective computing platform, improving its utilization becomes a critical issue. Determining an incoming application's sensitivity toward various resources is one of the major challenges to obtain higher utilization. To this end, previous research attempts to characterize an incoming application's sensitivity toward interference on various resources (Source of Interference...
The Internet has grown quite quickly, requiring more and more processing power each year to handle user requests in a timely fashion. In the multicore world, the addition of server-side threads should help improve server performance. However, several studies have shown that this is not true, identifying the Linux kernel as the possible culprit. Our working hypothesis is that the kernel does not provide...
A recent trend in big data analytics is to provide heterogeneous architectures that support hardware specialization. Considering the time required to create such hardware implementations, an analysis that estimates how much benefit is gained, in terms of speed and energy efficiency, by offloading various functions to hardware is necessary. This work analyzes data mining and machine...
Contemporary cloud environments are built on low-assurance components, so they cannot provide a high level of assurance about the isolation and protection of information. A “multi-level” secure cloud environment thus typically consists of multiple, isolated clouds, each of which handles data of only one security level. Not only are such environments duplicative and costly, but data “sharing” must also be implemented...
The number of smartphone users has increased rapidly, and interest has grown in how to prevent the leakage of enterprises' security data. There are technologies for using smartphone devices safely by separating the personal area from the business area. The technologies that assure security when using open-platform smartphone devices in tight security environments such as...
Malware today often uses very sophisticated methods to avoid being detected on the victim machine itself. However, hiding the actual communication between an attacker and his malware is often neglected by malware authors. As a consequence, intermediate hosts inspecting the incoming and outgoing traffic of the victim host may be able to detect the infection. In this paper, we describe a proof-of-concept...
The performance of a distributed file system significantly affects data-intensive applications that frequently execute I/O operations on large amounts of data. Although many modern distributed file systems are geared to provide highly efficient I/O performance, their operations are nonetheless affected by runtime overhead in data transfer between client nodes and I/O servers. A large part of the overhead...
With the introduction of low-power System on a Chip (SoC) processor architectures in enterprise server configurations, there is a growing need to develop software that will support the scale-out, data-intensive cloud applications deployed in data centers today. In this paper, we describe the design and implementation of a low-latency, fully compliant, user-space TCP/IP socket stack on a low...
One of the central building blocks of cloud platforms is Linux containers, which simplify the deployment and management of applications for scalability. However, they introduce new risks by allowing attacks on shared resources such as the file system, network, and kernel. Existing security hardening mechanisms protect specific applications and are not designed to protect entire environments as those...
Monitoring of high-performance computing systems and their components, such as clusters, grids, and federations of clusters, is performed using monitoring systems for servers and networks, or Network Monitoring Systems (NMS). These monitoring tools assist system administrators in assessing and improving the health of their infrastructure.
The problem of combining multi-modal features extracted from the characteristics of given Cloud Computing servers in a pattern recognition system is well known to be difficult. This paper presents a novel, efficient technique for normalizing sets of features that are highly multi-modal in nature, so as to allow them to be incorporated from a multi-dimensional feature distribution space. The intended system...
Data centers require many low-level network services to implement high-level applications. Key-Value Store (KVS) is a critical service that associates values with keys and allows machines to share these associations over a network. Most existing KVS systems run in software and scale out by running parallel processes on multiple microprocessor cores to increase throughput. In this paper, we take an...
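The key-value abstraction the abstract describes reduces to a small GET/PUT/DELETE interface; a minimal in-memory sketch (an illustration of the abstraction only, not the paper's hardware-oriented design) could look like this:

```python
# Minimal sketch of the key-value abstraction a KVS service exposes.
# A real networked KVS would serve this interface over sockets and
# shard or replicate the dictionary across machines; this in-process
# version (an assumption for illustration) shows only the interface.
class KVStore:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        """Associate value with key, overwriting any previous value."""
        self._data[key] = value

    def get(self, key, default=None):
        """Return the value for key, or default if absent."""
        return self._data.get(key, default)

    def delete(self, key):
        """Remove key, returning its value or None if absent."""
        return self._data.pop(key, None)

store = KVStore()
store.put("user:42", "alice")
value = store.get("user:42")     # "alice"
removed = store.delete("user:42")
```

Software KVS systems scale this interface out by running many such processes in parallel; the abstract's implied contrast is with implementations that move this logic off the microprocessor entirely.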
How can GPU acceleration be obtained as a service in a cluster? This question has become increasingly significant due to the inefficiency of installing GPUs on all nodes of a cluster. The research reported in this paper is motivated to address the above question by employing rCUDA (remote CUDA), a framework that facilitates Acceleration-as-a-Service (AaaS), such that the nodes of a cluster can request...
Stream processing is a compute paradigm that promises safe and efficient parallelism. Its realization requires optimization of multiple parameters such as kernel placement and communications. Most techniques to optimize streaming systems use queueing network models or network flow models, which often require estimates of the execution rate of each compute kernel. This is known as the non-blocking...
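The non-blocking service rate mentioned here is the rate at which a compute kernel would process items if it never waited on its input or output queues. One way to obtain the estimate a queueing-network model needs is to time the kernel in isolation; the sketch below is an illustrative assumption, not a specific system's API:

```python
import time

# Illustrative sketch: estimate a streaming kernel's non-blocking
# service rate by timing it on a batch of inputs with all queueing
# effects removed (no upstream or downstream blocking).
def estimate_service_rate(kernel, inputs):
    """Return items/second the kernel processes when never blocked."""
    items = list(inputs)
    start = time.perf_counter()
    for item in items:
        kernel(item)
    elapsed = time.perf_counter() - start
    return len(items) / elapsed

def square(x):        # stand-in compute kernel
    return x * x

rate = estimate_service_rate(square, range(100_000))
# 'rate' (items/s) is the kind of per-kernel estimate a queueing
# network or network flow model consumes when deciding kernel
# placement and communication buffer sizes.
```

In practice such offline estimates can diverge from in-situ rates once kernels contend for cores and caches, which is one reason obtaining them is called out as a distinct problem.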