HydroTerre is a research prototype platform developed at Penn State for the hydrology community. It provides access to aggregated scientific data sets that are useful for hydrological modeling and research. HydroTerre’s frontend is a web service, and a user query can request the creation of a data bundle whose size can vary from a few megabytes to hundreds of gigabytes. In this article, we present software...
As cloud computing has grown with tremendous momentum, modern data center networks face the challenge of handling the increasing traffic demand among virtual machines (VMs). Simply adding more switches and links may increase network capacity, but it also increases complexity and infrastructure cost. Thus, intelligent VM placement has been proposed to reduce intra-DC traffic. Prior...
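The abstract names intelligent VM placement without detailing an algorithm. One widely used heuristic in this space, sketched below in Python with hypothetical traffic figures and capacities (not taken from the paper), greedily co-locates the VM pairs that exchange the most traffic so their flows stay on the host's internal switch instead of crossing the DC fabric:

    # Hypothetical traffic matrix (GB/hour exchanged between VM pairs) and host capacity.
    traffic = {("vm1", "vm2"): 40, ("vm1", "vm3"): 5, ("vm2", "vm3"): 8,
               ("vm3", "vm4"): 30, ("vm2", "vm4"): 2}
    host_capacity = 2          # VM slots per host
    hosts = {0: [], 1: []}

    # Greedy heuristic: place the chattiest unplaced VM pairs together first.
    placed = set()
    for (a, b), _ in sorted(traffic.items(), key=lambda kv: -kv[1]):
        if a in placed or b in placed:
            continue
        for h, vms in hosts.items():
            if len(vms) + 2 <= host_capacity:
                vms.extend([a, b])
                placed.update([a, b])
                break
    print(hosts)  # {0: ['vm1', 'vm2'], 1: ['vm3', 'vm4']}

This is only a sketch of the general idea; real placement algorithms also account for CPU/memory constraints and migration costs.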
Large-scale cloud platforms can benefit from a service that runs a machine learning model to predict disk drive failures. Unlike previous studies in this space, we combined multiple data inputs for the model and obtained better performance than earlier published models. In this paper we explain how we developed and deployed the predictive model in a large-scale cloud service. To...
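The abstract does not specify the model, so the following is only a generic illustration of the idea of combining multiple data sources into one failure classifier; the feature names, synthetic data, and choice of gradient-boosted trees are assumptions for illustration, not the paper's actual pipeline:

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in data: each row is one drive-day, mixing SMART counters
    # with a system-level signal (host I/O latency) -- the key idea being that
    # multiple data inputs beat any single source alone.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))  # hypothetical: [smart_5, smart_187, smart_197, io_latency]
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 1.5).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = GradientBoostingClassifier().fit(X_tr, y_tr)
    print(f"holdout accuracy: {clf.score(X_te, y_te):.2f}")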
The applications hosted in a datacenter share more than just servers; they also share electrical circuits. Datacenter managers provision the power capacity of these circuits to hosted applications, often based on their peak power needs. In this work, we studied the actual and peak power needs of three real datacenters, using data from 1) hardware manufacturers and 2) actual, observed power needs to estimate...
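The gap the abstract alludes to between per-application peak provisioning and the circuit's actual aggregate demand can be shown with a small worked example; the power traces below are hypothetical, not figures from the paper. Because application peaks rarely coincide, the sum of individual peaks exceeds the peak of the sum:

    import numpy as np

    # Hypothetical per-application power traces (watts sampled over time).
    traces = np.array([
        [310, 420, 380, 455, 400],   # app A
        [220, 260, 300, 240, 280],   # app B
        [150, 190, 170, 210, 160],   # app C
    ])

    sum_of_peaks = traces.max(axis=1).sum()   # capacity if provisioned per-app peak
    peak_of_sum = traces.sum(axis=0).max()    # capacity the circuit actually needs
    print(f"per-peak provisioning:   {sum_of_peaks} W")   # 965 W
    print(f"observed aggregate peak: {peak_of_sum} W")    # 905 W
    print(f"recoverable headroom:    {sum_of_peaks - peak_of_sum} W")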
Simulation is an important and widely used method for analyzing the behavior of large systems, with many existing applications. Special branches of research are the simulation of very large models using distributed simulation, and embedded simulation, i.e., the coupling of virtual models with physical hardware. In our work, we approach the combination of both challenges, and thus use distributed simulation...
We present a domain-decomposition-based preconditioner for the solution of partial differential equations (PDEs) that is resilient to both soft and hard faults. The algorithm is based on the following steps: first, the computational domain is split into overlapping subdomains; second, the target PDE is solved on each subdomain for sampled values of the local boundary conditions; third, the...
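As a hedged illustration of the first two steps only, the minimal sketch below applies them to a 1D Poisson problem: because a subdomain solution depends affinely on its boundary values, a handful of sampled solves suffices to recover the boundary-to-solution map by least squares. The grid size, sampling scheme, and choice of test problem are assumptions for illustration, not the paper's method:

    import numpy as np

    def solve_subdomain(n, h, f, left_bc, right_bc):
        """Solve -u'' = f on one subdomain (n interior points, Dirichlet BCs)."""
        A = (np.diag(2.0 * np.ones(n)) +
             np.diag(-np.ones(n - 1), 1) +
             np.diag(-np.ones(n - 1), -1)) / h**2
        b = np.full(n, f, dtype=float)
        b[0] += left_bc / h**2       # fold boundary values into the RHS
        b[-1] += right_bc / h**2
        return np.linalg.solve(A, b)

    # Step 2: solve the local PDE for sampled boundary values, then fit the
    # (affine) map from (left_bc, right_bc) to the solution at the midpoint.
    rng = np.random.default_rng(0)
    samples = [(a, b, solve_subdomain(49, 1/50, 1.0, a, b))
               for a, b in rng.uniform(0, 1, size=(5, 2))]
    X = np.array([[1.0, a, b] for a, b, _ in samples])
    y = np.array([u[len(u) // 2] for _, _, u in samples])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("u_mid = %.4f + %.4f*left + %.4f*right" % tuple(coef))

Re-solving individual subdomains from sampled boundary data is what makes the scheme tolerant to a lost or corrupted subdomain solve.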
Dynamic resource provisioning has become a practical approach to achieving high thermal and energy efficiency, improving scalability, and optimizing reliability for e-commerce applications running in modern data centers. In this paper, we propose a self-adjusting model called TERN to predict the thermal behavior of hardware resources for client sessions. TERN contains two major components: (1) a resource...
The growth of data increases the need for methods and paradigms that can provide high scalability, reliability, and fault tolerance over large amounts of data; Big Data frameworks are designed to meet this need. This research uses Apache Hadoop and a Virtual Private Server (VPS) to analyze performance through benchmark tests executed on local, geographically distributed, and...
Developers and end users of data management systems are now challenged to reduce the "energy consumption footprint" of existing implementations and configurations. In other words, energy efficiency has to be optimized, either by increasing performance or by consuming fewer resources. In fact, a large number of factors influence the performance and energy...
The first part of the paper discusses the limitations of current Cloud offerings in efficiently supporting performance-critical applications. A technical simulation from quantum chemistry is used as a guiding example. The focus is on I/O performance, the major bottleneck for this kind of application, where virtualisation is much less developed than for other hardware capabilities. Similar...
Data is abundantly present in today's world, and the amount of data we generate continues to grow. The representation and structure of this data, however, differ greatly depending on the software or platform. The wide variety of software available shows there is no one optimal way to model data for all software, but when you want to deploy software in the cloud using a Platform-as-a-Service (PaaS)...
Cloud computing must manage heterogeneous resources, and its data resource model must be flexible. This paper proposes a semi-structured resource description that combines virtualization technology with the interdependencies between resources to construct a heterogeneous cloud computing resource model, while integrating the data center's hardware and software resources, so...
Common data center energy efficiency metrics work only at a high abstraction level and require actually measured values. With these metrics, it is not possible to identify the sources of efficiency shortcomings or to explore possible changes in configuration or architecture. In this paper, an alternative metric addressing these drawbacks is introduced. The metric makes use of pre-characterized...
FPGA-based prototyping is nowadays common practice in the functional verification of hardware components, since it covers a large number of test cases in a shorter time than HDL simulation. In addition, an FPGA-based emulator significantly accelerates the simulation with respect to bit-true software models. This speed-up is crucial when the statistical properties of a system have to...
In Cloud Computing platforms, adding hardware monitoring devices to gather power usage data can be impractical or uneconomical due to the large number of machines to be metered. CloudMonitor, a monitoring tool that can generate power models for software-based power estimation, can provide insights into the energy costs of deployments without additional hardware. Accurate power usage data leads...
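A common form for such software-based power models, which CloudMonitor's actual model may or may not match, is a linear fit of metered wall power against software-reported CPU utilization, calibrated once on a reference machine and then reused without a meter; the calibration numbers below are hypothetical:

    import numpy as np

    # Hypothetical calibration data: (cpu_utilization, measured_watts) pairs
    # collected once on a reference machine with a hardware power meter.
    util = np.array([0.05, 0.2, 0.4, 0.6, 0.8, 0.95])
    watts = np.array([98, 112, 131, 149, 168, 181])

    # Fit watts = p_idle + k * util; deployments then estimate power from
    # software-reported utilization alone, with no physical meter attached.
    k, p_idle = np.polyfit(util, watts, 1)
    print(f"idle power: {p_idle:.1f} W")
    print(f"estimated draw at 50% load: {p_idle + k * 0.5:.1f} W")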
A data center network solution was designed to improve network performance in processing search traffic. The design is based on an analysis of the switching network's search traffic model. According to the features of search traffic, two technology innovations are explained in this paper. The solution significantly improves data center network performance in processing search traffic.
A design of integrated monitoring software for understanding the detailed status of applications and hardware devices in a short period of time is proposed. Data models for the real-time data, and system configurations for easier associations between the real-time data, are also proposed. To verify the effectiveness of the proposed integrated monitoring software, surveys of the operations managers using...
This paper deals with the development procedures of a HiL (Hardware-in-the-Loop) test bench for the verification and validation of embedded application software developed using the MBD (Model Based Design) engineering methodology. The testing environment has been developed mainly for PLC (Programmable Logic Controller) applications, using Matlab®/Simulink® as the simulation environment and the OPC...
Network service providers usually prepare hardware design, maintenance, and upgrade action plans based on experience, which often leads to a large amount of wasted resources. Although many complex models have been introduced into the research field of high-performance network QoS evaluation, such methods are hard to apply in production environments because of the random and complex character...
The field of modeling and simulation has long been seen as a viable way to develop new algorithms and technologies and to enable the development of large-scale distributed systems, where analytical validation is prohibited by the nature of the encountered problems. The use of discrete-event simulators in the design and development of large-scale distributed systems is appealing due to their...