Network Functions Virtualization (NFV) is expected to flexibly compose Virtual Network Functions (VNFs) by virtualizing existing network appliances and chaining them logically. Current VNFs are realized as VM-based appliances shared by multiple user VMs. However, the notion of NFV can be extended to reinforce the network functionality of individual user VMs by introducing VM-dedicated VNFs. In...
The project aims to develop a professional Virtual Machine Manager for the KVM hypervisor: a libvirt-based web interface for managing virtual machines. It will allow creating and configuring new domains and adjusting a domain's allocation of the underlying hardware resources. A VNC viewer will present a full graphical console for the guest domain to end users. To work with this service...
In the computing era, virtualization refers to creating a virtual version of a device or resource, such as a server, storage device, or computer network resources, or a combination of these, where the framework divides the resource into one or more execution instances. The software that creates and runs Virtual Machines (VMs) on the host hardware is called a hypervisor or VM manager. Nowadays, hypervisor-based virtualization...
ARINC 653 provides a strong isolation mechanism for safety-critical computing fields, such as aircraft. seL4, a third-generation microkernel, has been formally verified for functional correctness and provides a desirable code base for partitioning operating systems. However, there is still a long way from seL4 to a partitioning OS. We take the first step and focus on the temporal aspect, i.e., implementing a partitioned scheduler...
We present a novel architecture for sparse pattern processing, using flash storage with embedded accelerators. Sparse pattern processing on large data sets is the essence of applications such as document search, natural language processing, bioinformatics, subgraph matching, machine learning, and graph processing. One slice of our prototype accelerator is capable of handling up to 1TB of data, and...
Google BigTable's scale-out design for distributed key-value storage inspired a generation of NoSQL databases. Recently the NewSQL paradigm emerged in response to analytic workloads that demand distributed computation local to data storage. Many such analytics take the form of graph algorithms, a trend that motivated the GraphBLAS initiative to standardize a set of matrix math kernels for building...
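As an illustration of the linear-algebra view of graph algorithms that GraphBLAS standardizes, breadth-first search can be expressed as repeated sparse matrix-vector products over a Boolean (OR, AND) semiring. The sketch below is a minimal pure-Python illustration of that formulation under assumed adjacency-set input, not the GraphBLAS API or the system described in the abstract.

```python
# BFS as repeated sparse matrix-vector products over the Boolean
# (OR, AND) semiring -- the formulation GraphBLAS standardizes.
# Illustrative sketch: real GraphBLAS kernels operate on compressed
# sparse matrices with user-pluggable semirings.

def bfs_levels(adj, source):
    """adj: dict mapping node -> set of neighbours (a sparse Boolean matrix).
    Returns a dict mapping each reachable node to its BFS level."""
    levels = {source: 0}
    frontier = {source}                  # current "vector" of active nodes
    level = 0
    while frontier:
        level += 1
        # y = A^T x over (OR, AND): union of the frontier's neighbour sets
        nxt = set()
        for u in frontier:
            nxt |= adj.get(u, set())
        frontier = nxt - levels.keys()   # mask out already-visited nodes
        for v in frontier:
            levels[v] = level
    return levels

graph = {0: {1, 2}, 1: {3}, 2: {3}, 3: {4}}
print(bfs_levels(graph, 0))  # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```

The visited-node mask (`nxt - levels.keys()`) plays the role of the output mask in GraphBLAS operations, which is what keeps each matvec's work proportional to the frontier size.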
Software memory-disclosure attacks, such as buffer over-reads, often work quietly and can leak secrets. The well-known OpenSSL Heartbleed vulnerability leaked the private keys of millions of servers and left much of the Internet's services insecure at the time. Existing solutions are either hard to apply to large code bases or too heavyweight (e.g. by involving a hypervisor software or...
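The bug class behind Heartbleed can be sketched in a few lines: a handler echoes a payload back using a client-supplied length instead of the actual payload length, so whatever sits next to the payload in memory leaks. This is a pure-Python simulation of the over-read pattern (the buffer, secret, and function names are illustrative), not the OpenSSL code.

```python
# Simulated Heartbleed-style over-read: the handler trusts the
# client-supplied claimed_len, so bytes adjacent to the payload in the
# same flat buffer (here, a fake secret) are returned to the client.

SECRET = b"PRIVATE-KEY-MATERIAL"

def heartbeat(buffer: bytes, payload_len: int, claimed_len: int) -> bytes:
    # Vulnerable: uses claimed_len without checking it against payload_len.
    return buffer[:claimed_len]

def heartbeat_fixed(buffer: bytes, payload_len: int, claimed_len: int) -> bytes:
    # Fixed (mirroring the actual patch strategy): silently discard
    # requests whose claimed length exceeds the real payload length.
    if claimed_len > payload_len:
        return b""
    return buffer[:claimed_len]

payload = b"hello"
memory = payload + SECRET            # payload sits right next to the secret

leak = heartbeat(memory, len(payload), 25)
assert SECRET in leak                # over-read exposed the secret
assert heartbeat_fixed(memory, len(payload), 25) == b""
print("over-read leaked:", leak)
```

In a memory-safe language the slice simply stops at the buffer boundary; in C the equivalent `memcpy` walks past the allocation, which is why heavyweight defenses such as hypervisor-level monitoring were proposed.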
Most security measures for network computing are provided by the service provider itself. However, such measures cannot be fully trusted by the user, who lacks the ability to control the resources directly; the user's computing environment cannot know what software will be provided. To address this problem, we present a security scheme, named Cleanroom Monitoring System (CMS),...
A smart home control system can link independent electric appliances together 'intelligently' and provide comprehensive, fast and smooth exchange of information. Such a system gives people real-time information and control, which improves their quality of life. In this study, an embedded smart home system based on socket network connections was designed using an ARM9 development board and an embedded Linux operating...
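The socket-based control channel such a system relies on can be sketched as a controller that accepts one text command per connection and acknowledges it. The sketch below runs over loopback TCP; the "LIGHT ON" command format and the threading setup are illustrative assumptions, not the paper's protocol.

```python
# Minimal sketch of a socket-based appliance control channel: a
# controller thread accepts one text command per TCP connection and
# acknowledges it. Command names here are assumptions for illustration.
import socket
import threading

def appliance_server(ready, results):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))           # 0 = pick a free ephemeral port
    srv.listen(1)
    ready["port"] = srv.getsockname()[1]
    ready["event"].set()                 # tell the client we are listening
    conn, _ = srv.accept()
    with conn:
        data = b""
        while True:                      # read until client half-closes
            chunk = conn.recv(1024)
            if not chunk:
                break
            data += chunk
        cmd = data.decode().strip()      # e.g. "LIGHT ON"
        results.append(cmd)
        conn.sendall(b"OK " + cmd.encode())
    srv.close()

ready = {"event": threading.Event()}
results = []
t = threading.Thread(target=appliance_server, args=(ready, results))
t.start()
ready["event"].wait()

with socket.create_connection(("127.0.0.1", ready["port"])) as c:
    c.sendall(b"LIGHT ON")
    c.shutdown(socket.SHUT_WR)           # half-close: signal end of command
    reply = c.recv(1024)
t.join()
print(reply.decode())  # OK LIGHT ON
```

The half-close (`SHUT_WR`) lets the server read to end-of-stream instead of guessing message boundaries, a common choice for simple one-shot command protocols on embedded targets.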
Many existing issues in the power sector, such as demand response management, theft detection and outage management, can be solved efficiently through grid modernization. Among these, demand response is one issue that affects overall grid stability. One way of managing demand response is to balance the load in the smart grid (SG). In this paper, a novel scheme for handling the demand response...
Power capping is a fundamental method for reducing the energy consumption of a wide range of modern computing environments, ranging from mobile embedded systems to datacentres. Unfortunately, maximising performance and system efficiency under static power caps remains challenging, while maximising performance under dynamic power caps has been largely unexplored. We present an adaptive power capping...
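The basic mechanics of enforcing a power cap can be sketched as a feedback loop: measure power each step, throttle a frequency knob when over the cap, and boost when there is headroom. The linear power model, step sizes and bounds below are illustrative assumptions, not the adaptive scheme the abstract describes.

```python
# Toy feedback controller for a static power cap. Each step it reads a
# (modelled) power value, throttles the frequency knob if over the cap,
# and boosts it when there is at least one step's worth of headroom.
# The linear power model and constants are assumptions for illustration.

def power_at(freq_ghz):
    # Assumed model: static power plus a linear dynamic term.
    return 20.0 + 30.0 * freq_ghz        # watts

def cap_controller(cap_w, freq=3.0, step=0.1, iters=50):
    for _ in range(iters):
        power = power_at(freq)
        if power > cap_w:
            freq = max(0.5, freq - step)         # throttle
        elif power < cap_w - 30.0 * step:        # headroom for a full step
            freq = min(3.0, freq + step)         # boost
    return freq

freq = cap_controller(cap_w=80.0)
print(round(freq, 2), "GHz ->", round(power_at(freq), 1), "W")
```

The headroom guard on the boost branch keeps the loop from oscillating around the cap; real power-capping controllers (e.g. RAPL-style) add measurement averaging and per-domain budgets on top of this idea.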
Software-based network packet processing on standard high volume servers promises better flexibility, manageability and scalability, thus gaining tremendous momentum in recent years. Numerous research efforts have focused on boosting packet processing performance by offloading to discrete Graphics Processing Units (GPUs). While integrated GPUs, residing on the same die with the CPU, offer many advanced...
Process schedulers are part of the core functionality of an operating system (OS), and have been enhanced over the years to account for multiple cores in the processors and to support multi-threaded applications. In this study, we investigate the impact of the Linux scheduler's load-balancing algorithm on the performance of multi-threaded OpenSIPS (an open source SIP proxy server, SPS) running on...
To make use of big data, various NOSQL data stores have been deployed, such as key-value stores and column-oriented stores. NOSQL data stores typically achieve a high degree of scalability while being specialized for specific purposes; thus, polyglot persistence, which employs multiple NOSQL data stores in a complementary fashion, is a practical choice for meeting a high diversity of application demands. We assume various...
Cloud computing is Internet-based computing for delivering services. The major components used to establish a cloud are distributed systems, service-oriented computing, Web 2.0, virtualization and utility computing. The integration of these components is required to make data available anytime and anywhere. Migration of virtual machines is the key issue in managing a heterogeneous cloud for load balancing. The...
We establish a framework that can be used by Origin Servers (content-generating organizations) for claiming Content Delivery Network (CDN) resources in a fine-grained way. The basis of our work lies in the use of Stocks, as well as a Secondary Market for stock trading — tools and products commonly used in modern capital markets. Network and disk resources are monitored through well-established...
Modern systems assume that privileged software always behaves as expected, however, such assumptions may not hold given the prevalence of kernel vulnerabilities. One idea is to employ defenses to restrict how adversaries may exploit such vulnerabilities, such as Control-Flow Integrity (CFI), which restricts execution to a Control-Flow Graph (CFG). However, proposed applications of CFI enforcement...
Big-memory applications, such as in-memory databases, de novo assembly in human genome sequencing, big data analytics, and large-scale scientific calculation, are increasing explosively. However, big-memory systems have been too expensive for many researchers and students. Therefore, methods to harvest remotely distributed memory have been considered a cost-effective way to run...
The expansion of computer systems' memory capacity has not kept up with the growth in the memory requirements of large-memory applications. Moreover, big-memory systems have been too expensive for many researchers and students. Therefore, approaches that utilize remote memory have been considered a cost-effective way to run large-memory applications in a cluster environment where...
Computational methods have become an important part of gene delivery research, as they allow researchers to experiment with different models of cellular processes. Models of the gene delivery process based on telecommunication theory make this experimentation especially efficient. Therefore, this paper presents a specialised FPGA-accelerated heterogeneous architecture for simulating the gene delivery...