In this paper, we present two methods that use neural networks to mine virtual machine usage data. For the one-for-all method, we train a single model and use it to predict a whole week's data. For the separated-model method, we split the test set into seven smaller sets, one per day, then use the corresponding day's model to predict that day's data. A whole week's data are used as the test set. The final...
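The per-day routing of the separated-model method can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the model objects, feature format, and scaling "models" in the toy usage are all assumptions.

```python
from datetime import datetime

# Hypothetical sketch of the "separated model" routing: seven models, one per
# weekday, each applied only to that day's slice of the test week.
def predict_week(samples, day_models):
    """samples: list of (timestamp, features); day_models: 7 callables,
    indexed 0 (Monday) .. 6 (Sunday)."""
    predictions = []
    for ts, features in samples:
        model = day_models[ts.weekday()]  # pick the model trained on this weekday
        predictions.append(model(features))
    return predictions

# Toy usage: each stand-in "model" just scales its input by (weekday + 1).
models = [lambda x, d=d: x * (d + 1) for d in range(7)]
week = [(datetime(2024, 1, 1), 10.0),   # Monday  -> model 0
        (datetime(2024, 1, 7), 10.0)]   # Sunday  -> model 6
print(predict_week(week, models))  # [10.0, 70.0]
```

The one-for-all method would skip the `weekday()` dispatch and apply a single trained model to all seven days.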
Big Data technologies like Hadoop are transforming analytics and processing, but is there a role the mainframe can play in improving big data analytics deployed in organizations' private clouds? Over the past 60 years, the mainframe has contributed extensively to the IT industry, but it has largely been replaced by newer machines and methods. In this paper we examine some potential advantages the mainframe...
Indexing plays an indispensable role in search engines. It enables efficient mining of data and reduces the latency of searching for a term across huge document collections. In this paper, we propose a methodology for indexing documents in a parallel, distributed manner. We define a metadata structure for each document; from this metadata, the occurrences of a word can be ascertained by document, page number...
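A word-to-document-to-page index of the kind alluded to above can be sketched as a nested inverted index. This is a minimal single-process illustration under assumed names, not the paper's parallel-distributed method:

```python
from collections import defaultdict

# Hypothetical sketch of per-document metadata: an inverted index mapping
# word -> document id -> set of page numbers where the word occurs.
def build_index(docs):
    """docs: {doc_id: {page_number: page_text}}"""
    index = defaultdict(lambda: defaultdict(set))
    for doc_id, pages in docs.items():
        for page_no, text in pages.items():
            for word in text.lower().split():
                index[word][doc_id].add(page_no)
    return index

docs = {"d1": {1: "big data indexing", 2: "indexing latency"},
        "d2": {1: "search engine indexing"}}
idx = build_index(docs)
print(sorted(idx["indexing"]["d1"]))  # [1, 2]
print(sorted(idx["indexing"]))        # ['d1', 'd2']
```

In a distributed setting, each worker would build such an index for its shard of documents and the partial indexes would be merged by word.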
Virtualization has been widely adopted by businesses. It is now becoming an effective way to manage massive hardware resources at flexible scales. While virtualization can significantly reduce power and hardware costs, fault detection becomes more difficult due to the increased scale and complexity of a virtualized environment. In this paper, a fault detection...
The expansion of IT applications creates a serious need for environments with huge storage. In storage systems, capacity and agility are two factors that face limitations. This article evaluates the role of different techniques with regard to the possible methods for accessing storage systems. We have prepared three different scenarios using direct, semi-virtual, and virtual attachment models....
With the resurgence of virtualization technologies and the development of multi-core technologies, combining the two has become a trend. Inter-VM communication is therefore a key factor in improving the performance of virtual machines (VMs) on multi-core platforms. In this paper, we first analyze the characteristics of multi-core tasks and the properties of the virtual machine environment,...
ROSA is an overlay network which, deployed on a network, provides endogenous routing that is more resilient than IP. ROSA routing detects and bypasses failures of routers, gateways and other underlying physical devices. Once installed on an information system, ROSA is completely autonomous and needs no additional human intervention for its maintenance. ROSA has been proved scalable, since it was shown...
Energy efficiency in the field of information and communication technology is becoming increasingly important due to rising energy costs and the desire to reduce CO2 emissions. Office environments of public administrations and companies offer high potential for energy savings. In such environments a large number of hosts operate on a 24/7 basis. This paper suggests an Energy-Efficient...
Recently, solutions for remote access to residential services have been proposed. However, these solutions require modifications to the service controllers. In addition, remote access adds complexity to the client application. We propose here a solution that decouples remote access from the client itself through an entity that creates virtual instances of remote services in a local network. Thereby, clients...
Modern data centers use virtual-machine-based implementations for numerous advantages such as resource isolation, hardware utilization, security and easy management. Applications are generally hosted on different virtual machines on the same physical machine. A virtual machine monitor such as Xen is a popular tool for managing virtual machines by scheduling their use of resources such as CPU, memory and network...
Managing virtual machines (VM) in large scale enterprise grid scenarios, commonly encountered in data centers, is extremely challenging. Currently, live VM migration is based on QoS non-conformance events; migration of a VM is initiated as soon as the aggregate resource (CPU and memory) requirements of the VMs on the physical machine (PM) exceed the capacity available on the PM. However, this paper...
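The QoS non-conformance trigger described above reduces to a simple capacity check. The following is a minimal sketch with assumed names and units, not the paper's mechanism:

```python
# Hypothetical sketch: migration is initiated as soon as the aggregate CPU
# and memory demand of the VMs on a physical machine (PM) exceeds its capacity.
def needs_migration(vms, pm_cpu, pm_mem):
    """vms: list of (cpu_demand, mem_demand) tuples; pm_cpu/pm_mem: PM capacities."""
    total_cpu = sum(cpu for cpu, _ in vms)
    total_mem = sum(mem for _, mem in vms)
    return total_cpu > pm_cpu or total_mem > pm_mem

# Two VMs fit within a 4-core / 16 GB PM; raising one VM's CPU demand overloads it.
print(needs_migration([(2.0, 4.0), (1.5, 8.0)], pm_cpu=4.0, pm_mem=16.0))  # False
print(needs_migration([(2.0, 4.0), (3.0, 8.0)], pm_cpu=4.0, pm_mem=16.0))  # True
```

A proactive scheme, by contrast, would forecast the demand series and act before this threshold is crossed.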
Server virtualization is a key technology for today's data centers, allowing dedicated hardware to be turned into resources that can be used on demand. However, in spite of its important role, the overall security impact of virtualization is not well understood. To remedy this situation, we have performed a systematic literature review on the security effects of virtualization. Our study shows that,...
I/O virtualization, especially NIC virtualization, is a hot spot in the research field. In order to utilize the global NIC resources deployed in a distributed virtual machine monitor (DVMM) system, this paper proposes a new approach to implementing NIC virtualization for the DVMM. The approach combines hardware-assisted virtualization with single-system-image technologies, and resides in the...
Virtual systems and virtualization technology are gaining momentum in today's data centers and IT infrastructure models. Performance analysis of such systems is invaluable for enterprises, yet it is not a deterministic process. A single-workload benchmark is useful for quantifying the virtualization overhead within a single VM, but not in a whole virtualized environment with multiple isolated...
One of the novel benefits of virtualization is its ability to emulate many hosts with a single physical machine. This approach is often used to support at-scale testing for large-scale distributed systems. To better understand the precise ways in which virtual machines differ from their physical counterparts, we have started to quantify some of the timing artifacts that appear to be common to two...
This paper describes the integration of the presence management service into existing IP Multimedia Subsystem (IMS) laboratory infrastructures. Both laboratories - NGNlab in Bratislava and the laboratory in Leipzig - use the Fraunhofer OpenIMS as core components for their testing IMS environment (i.e. Call Session Control Functions (CSCF), Home Subscriber Server (HSS)). The virtualization of these...
The purpose of this demonstration is to show the functionality of EDIV tool to manage distributed virtualization scenarios that are deployed on PASITO, a federated experimentation infrastructure created and coordinated by RedIRIS (the Spanish Research and Education Network). The demo presents the different phases a researcher would follow to create and deploy a virtual network scenario to experiment...
As computers continue to gain CPU processing power, data centers need to optimize their power usage. We can do this, while maintaining the same complexity level as before, by using virtualized environments. We can put a large number of small isolated servers inside a large one and improve metrics such as power consumption, space usage, and resource usage. In this...
The CARRIOCAS project studies and implements a high-bit-rate optical network (up to 40 Gb/s per wavelength) to enable high-performance applications in numerical design, virtual prototyping and scientific research to access shared high-capacity computing and storage resources. The project's research covers optical components and systems, network architecture and management, distributed file system...
In this paper, we present an approach for software rejuvenation based on automated self-healing techniques that can be easily applied to off-the-shelf application servers. Software aging and transient failures are detected through continuous monitoring of system data and performability metrics of the application server. If some anomalous behavior is identified, the system triggers an automatic rejuvenation...
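The monitor-then-trigger loop described above can be sketched in a few lines. This is a hypothetical illustration under assumed names; the actual detection uses continuous monitoring of system data and performability metrics, not a single fixed threshold:

```python
# Hypothetical sketch: sample a performability metric and trigger
# rejuvenation (e.g. an application-server restart) once it degrades
# past a threshold.
def monitor(samples, threshold, rejuvenate):
    """samples: iterable of metric readings (e.g. response time in ms);
    rejuvenate: callback invoked on anomaly; returns the triggering index."""
    for i, value in enumerate(samples):
        if value > threshold:       # anomalous behavior detected
            rejuvenate()
            return i
    return -1                       # no anomaly in this window

events = []
fired = monitor([120, 140, 900, 130], threshold=500,
                rejuvenate=lambda: events.append("restart"))
print(fired, events)  # 2 ['restart']
```

In an off-the-shelf application server, the `rejuvenate` callback would typically map to a graceful restart of the affected server instance.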