Non-Volatile Memories (NVMs) can significantly improve the performance of data-intensive applications. A popular form of NVM is battery-backed DRAM, which is available and in use today, offering DRAM's latency without the endurance problems of emerging NVM technologies. Modern servers can be provisioned with up to 4 TB of DRAM, and provisioning battery backup to write out such large memories is hard...
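A back-of-envelope calculation (with invented, illustrative numbers) shows why: draining a full 4 TB of DRAM to flash at a plausible aggregate write bandwidth takes on the order of minutes, and the battery must hold up the entire server for that long.

# Back-of-envelope: battery hold-up time needed to flush 4 TB of DRAM.
# The 10 GB/s aggregate flash write bandwidth is an invented, illustrative figure.
dram_bytes = 4 * 2**40          # 4 TB of DRAM
flash_write_bw = 10e9           # 10 GB/s aggregate, hypothetical
drain_seconds = dram_bytes / flash_write_bw
print(f"battery must power the server for ~{drain_seconds:.0f} s")  # ~440 s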
Memory and logic integration on the same chip is becoming increasingly cost-effective, creating the opportunity to offload data-intensive functionality to processing units placed inside memory chips. The introduction of memory-side processing units (MPUs) into conventional systems faces virtual memory as the first big showstopper: without efficient hardware support for address translation, MPUs have...
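The translation cost the abstract alludes to can be illustrated with a toy two-level page-table walk (the 32-bit layout and all names below are our illustration, not the paper's design): an MPU without its own TLB or page walker would pay two extra memory accesses on every reference.

# Toy 32-bit, two-level page-table walk (illustrative only).
PAGE_SHIFT = 12
L1_BITS = L2_BITS = 10

def translate(l1_table, vaddr):
    l1_idx = (vaddr >> (PAGE_SHIFT + L2_BITS)) & ((1 << L1_BITS) - 1)
    l2_idx = (vaddr >> PAGE_SHIFT) & ((1 << L2_BITS) - 1)
    offset = vaddr & ((1 << PAGE_SHIFT) - 1)
    l2_table = l1_table.get(l1_idx)        # first extra memory access
    if l2_table is None:
        raise KeyError("page fault at L1")
    frame = l2_table.get(l2_idx)           # second extra memory access
    if frame is None:
        raise KeyError("page fault at L2")
    return (frame << PAGE_SHIFT) | offset

# Usage: map virtual page (l1=1, l2=0) to physical frame 0x1A2 and translate.
page_tables = {1: {0: 0x1A2}}
assert translate(page_tables, (1 << 22) | 0x0ABC) == (0x1A2 << 12) | 0xABC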
Recent advancements in technology are reshaping our daily lives. This transformation has turned traditional classrooms into virtual classrooms, enabling millions of people to access knowledge at their fingertips. Recently, the Thin Client has become one of the most widely adopted technologies in higher education. Universities in Saudi Arabia are among the pioneering universities internationally that have embraced Thin...
The use of Graphics Processing Units (GPUs) has become a very popular way to accelerate the execution of many applications. However, GPUs are not free of drawbacks. For instance, GPUs are expensive devices that additionally consume a non-negligible amount of energy even when they are not performing any computation. Furthermore, most applications exhibit low GPU utilization. To address these...
Recent deployments of FPGAs as compute resources in data centers have raised security concerns. One concern is how to prevent user-deployed logic in the FPGA from accessing privileged data such as physical addresses or raw network traffic. Addressing this issue relies on the concept of ‘privileged’-mode FPGA logic that is kept separate from ‘user’-mode logic. Logical separation can be achieved with design...
Cloud computing is an on-demand access model for computing resources, most notably embodied by the OpenStack project. As of the Liberty release, OpenStack supports provisioning bare-metal, virtual machine (VM), and container-based hosts. These different host types incur different overheads. Consequently, the main goal of this paper is to empirically quantify those overheads through a series of experiments. The...
Memory reliability is a key factor in the design of warehouse-scale computers. Prior work has focused on the performance overheads of memory fault-tolerance schemes when errors do not occur at all, and when detected but uncorrectable errors occur, which result in machine downtime and loss of availability. We focus on a common third scenario, namely, situations when hard but correctable faults exist...
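The "hard but correctable" scenario is easiest to see with a toy single-error-correcting Hamming(7,4) code; production servers use wider SEC-DED or chipkill codes, but the mechanics are the same (a minimal sketch, not the schemes evaluated in the paper).

# Hamming(7,4): 4 data bits -> 7-bit codeword, corrects any single bit flip.
def encode(d):                        # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]           # parity over codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]           # parity over codeword positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]           # parity over codeword positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def correct(c):
    s = ((c[0] ^ c[2] ^ c[4] ^ c[6])          # syndrome bit 1
         | (c[1] ^ c[2] ^ c[5] ^ c[6]) << 1   # syndrome bit 2
         | (c[3] ^ c[4] ^ c[5] ^ c[6]) << 2)  # syndrome bit 4
    if s:                             # nonzero syndrome = 1-based error position
        c[s - 1] ^= 1
    return c

# A hard but correctable fault: one stuck bit that is fixed on every read.
word = encode([1, 0, 1, 1])
word[4] ^= 1                          # the stuck bit flips on every access
assert correct(word) == encode([1, 0, 1, 1])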
Instead of scaling an application and data around the computer, programmers can use a software-defined server—an inverse hypervisor—in which multiple physical machines run a single virtual machine. Memory can be expanded as needed without modifying the application or limiting its data.
In this paper, we elucidate the main performance bottleneck in realizing packet processing that requires lookups into carrier-scale, huge tables (carrier-scale packet processing) on top of general-purpose servers. Our experimental quantitative analysis using a DPDK-Click-based packet processing model reveals that the performance degradation of carrier-scale packet processing is mainly caused by increased...
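The table-size effect the abstract points to can be sketched with a naive longest-prefix-match lookup (our illustration, not the paper's DPDK-Click data path): every probe of a carrier-scale table is effectively a random memory access, and once the table outgrows the CPU caches those probes dominate per-packet cost.

import ipaddress

def add_route(table, cidr, next_hop):
    net = ipaddress.ip_network(cidr)
    table[(int(net.network_address) >> (32 - net.prefixlen), net.prefixlen)] = next_hop

def lookup(table, ip):
    a = int(ipaddress.ip_address(ip))
    # Up to 33 hash probes per packet; on a multi-million-entry table,
    # most probes miss the cache and go to DRAM.
    for plen in range(32, -1, -1):
        nh = table.get((a >> (32 - plen), plen))
        if nh is not None:
            return nh
    return None

table = {}
add_route(table, "0.0.0.0/0", "default-gw")
add_route(table, "10.0.0.0/8", "core-A")
add_route(table, "10.1.0.0/16", "edge-B")
assert lookup(table, "10.1.2.3") == "edge-B"      # most specific prefix wins
assert lookup(table, "192.0.2.1") == "default-gw"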
Big data and machine learning are rapidly developing fields with evolving and increasingly diverse hardware requirements. The goal of this project was to demonstrate that an enterprise-ready, Warewulf-based HPC compute cluster could support heterogeneous hardware via the integration of a GPU compute server. The benefits of this were two-fold. First, the integration of the GPU compute server into the...
An Open Ethernet Drive (OED) is a new technology that encloses in a hard drive (HDD or SSD) a low-power processor, a fixed-size memory, and an Ethernet card. In this study, we thoroughly evaluate the performance of such a device and the energy required to operate it. The results show, first, that it is a viable solution to offload data-intensive computations to the OED while maintaining a reasonable...
Digital systems used in critical infrastructures have to fulfill ever-higher demands on performance and cost efficiency. Thus, there is a trend toward commercial off-the-shelf processors. To ensure the correct functioning of such devices, even after a long time in operation, mechanisms to recover from permanent hardware faults (e.g., due to wear-out effects) are needed. However, there is a lack of flexible...
Virtual teaching is becoming relevant in the offerings of many universities. ‘Virtual Campus’ platforms allow the distribution of content and communication with students. Nevertheless, problems arise when we want to carry out laboratory experiments. These experiments play a fundamental role in electronics teaching, and although they can be complemented with simulation, they cannot...
Computing servers, whether low- or high-end, have traditionally been designed and built using a main board and its hardware components as a “hard” monolithic building block; this formed the base unit upon which the system hardware and software stack were designed and built. This hard deployment and management boundary on compute, memory, network, and storage resources is either fixed or quite limited in...
The article formulates the problem of studying the reliability of a high-availability information system based on a server-redundancy architecture. It offers a way of solving the problem by constructing a random process that describes the structural degradation of the information system as its elements fail. There is a detailed review of the restrictions...
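As a hedged illustration of the kind of model such a formulation leads to (our toy example, not the article's construction): if each of two redundant servers fails with rate $\lambda$ and is repaired independently with rate $\mu$, its steady-state availability is $a = \mu/(\lambda+\mu)$, and a 1-out-of-2 system is unavailable only when both servers are down simultaneously:

\[
a = \frac{\mu}{\lambda + \mu},
\qquad
A_{\text{sys}} = 1 - (1 - a)^2 = 1 - \left(\frac{\lambda}{\lambda + \mu}\right)^{2}.
\]

The random process described in the article generalizes this two-state picture by tracking which subset of elements has failed, i.e., the system's structural degradation, rather than a single up/down state per server.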
This paper proposes a methodology for establishing a honeynet in which the host machine works as the honeywall and thus takes advantage of the underlying architecture on which it is deployed. Such a setup helps to minimize the CPU and RAM load of running an extra virtual machine for the CDROM Roo. In this implementation, various types of honeypots continue to run in a virtual environment using VirtualBox, as in the case...
This paper presents an in-depth characterization of the resiliency of more than 5 million HPC application runs completed during the first 518 production days of Blue Waters, a 13.1 petaflop Cray hybrid supercomputer. Unlike past work, we measure the impact of system errors and failures on user applications, i.e., the compiled programs launched by user jobs that can execute across one or more XE (CPU)...
Traditional database management systems (DBMSs) running on powerful single-node servers are usually over-provisioned for most of their daily workloads and, because they do not exhibit good enough energy proportionality, waste a lot of energy while underutilized. A cluster of small (wimpy) servers, whose size can be dynamically adjusted to the current workload, offers better energy characteristics...
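The energy argument can be made concrete with the standard linear server power model (a common textbook approximation, not necessarily the model used in the paper): a single big server draws idle power even with no load, while a cluster that powers its nodes on and off tracks the load,

\[
P_{\text{big}}(u) = P_{\text{idle}} + \bigl(P_{\text{peak}} - P_{\text{idle}}\bigr)\,u,
\qquad
P_{\text{cluster}}(u) \approx \lceil u\,N \rceil \, p_{\text{node}}.
\]

For example, with $P_{\text{idle}} \approx 0.5\,P_{\text{peak}}$, a big server at utilization $u = 0.3$ still draws about 65% of its peak power, whereas a right-sized wimpy cluster draws roughly 30% of its peak.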
Cloud computing makes extensive use of virtual machines because they permit workloads to be isolated from one another and resource usage to be somewhat controlled. In this paper, we explore the performance of traditional virtual machine (VM) deployments and contrast it with the use of Linux containers. We use KVM as a representative hypervisor and Docker as a container manager. Our results...
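A minimal harness of the kind used for such comparisons (our sketch, not the paper's benchmark suite) can be run unchanged on bare metal, inside a KVM guest, and inside a container, e.g. with docker run --rm -v $PWD:/w python:3 python /w/bench.py, to expose where each layer adds overhead.

import os
import time

def cpu_bound(n=5_000_000):
    # Pure arithmetic loop: typically shows near-zero overhead under KVM or Docker.
    s = 0
    for i in range(n):
        s += i * i
    return s

def syscall_bound(n=200_000):
    # Repeated stat(2) calls: exercises the kernel/hypervisor boundary.
    for _ in range(n):
        os.stat("/")

def bench(label, fn, reps=5):
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - t0)
    print(f"{label}: {best:.4f} s (best of {reps})")

bench("cpu-bound", cpu_bound)
bench("syscall-bound", syscall_bound)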
Total Cost of Ownership (TCO) is a key optimization metric for the design of a datacenter. This paper proposes, for the first time, a framework for modeling the implications of DRAM failures and DRAM error protection techniques on the TCO of a datacenter. The framework captures the effects and interactions of several key parameters including: the choice of DRAM protection technique (e.g. single vs...
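The flavor of such a framework can be conveyed with a deliberately simplified cost model (the linear structure and every parameter below are our hypothetical placeholders, not the paper's actual framework): stronger DRAM protection inflates capital cost but reduces expected downtime cost from uncorrectable errors.

# Hypothetical, simplified DRAM-related TCO model (illustrative only).
def dram_tco(servers, dimms_per_server, dimm_cost, capacity_overhead,
             uncorrectable_fit, repair_hours, downtime_cost_per_hour,
             lifetime_hours=5 * 365 * 24):
    # Capex: DRAM purchase, inflated by the protection scheme's capacity overhead.
    capex = servers * dimms_per_server * dimm_cost * (1 + capacity_overhead)
    # FIT = failures per 1e9 device-hours; expected failures over the lifetime.
    failures = servers * dimms_per_server * uncorrectable_fit * lifetime_hours / 1e9
    opex = failures * repair_hours * downtime_cost_per_hour
    return capex + opex

# Trade-off with invented numbers: +12% capacity overhead vs. 10x fewer
# uncorrectable errors; the crossover depends heavily on downtime cost.
weak = dram_tco(10_000, 16, 80, 0.00, uncorrectable_fit=50,
                repair_hours=2, downtime_cost_per_hour=300)
strong = dram_tco(10_000, 16, 80, 0.12, uncorrectable_fit=5,
                  repair_hours=2, downtime_cost_per_hour=300)
print(f"weak protection: ${weak:,.0f}   strong protection: ${strong:,.0f}")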