The ExaNeSt project started in December 2015 and is funded by the EU H2020 research framework (call H2020-FETHPC-2014, no. 671553) to study the adoption of clusters of low-cost, Linux-based, power-efficient 64-bit ARM processors for Exascale-class systems. The ExaNeSt consortium pools partners with industrial and academic research expertise in storage, interconnects and applications that share a vision of...
One of the prominent problems in cloud datacenters is the unpredictability of tenants' applications. Although this problem has been recognized, prior solutions do not consider the bandwidth allocation from both the perspective of tenants' network resource requests and the applications' actual network requirements. To address this issue, we present SpongeNet+, a comprehensive solution that consists...
MeteoSwiss, the Swiss national weather forecast institute, has selected densely populated accelerator servers as its primary system to compute weather forecast simulations. Servers with multiple accelerator devices that are primarily connected by a PCI-Express (PCIe) network achieve a significantly higher energy efficiency. Memory transfers between accelerators in such a system are subject to PCIe...
Reading and writing data efficiently from storage systems is critical for high performance data-centric applications. These I/O systems are being increasingly characterized by complex topologies and deeper memory hierarchies. Effective parallel I/O solutions are needed to scale applications on current and future supercomputers. Data aggregation is an efficient approach consisting of electing some...
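The abstract above describes data aggregation, in which a subset of processes is elected to collect small scattered chunks and issue a few large contiguous writes. As a hedged illustration of the general idea only (the paper's actual election strategy is not shown here, and the names `elect_aggregators` and `aggregate_writes` are hypothetical), a minimal sketch:

```python
# Sketch of I/O data aggregation: a subset of ranks ("aggregators")
# collects small chunks from all ranks and merges them into a few
# large contiguous buffers, instead of issuing many small writes.

def elect_aggregators(n_ranks, n_aggregators):
    """Evenly spread aggregator roles across the ranks."""
    step = n_ranks // n_aggregators
    return [i * step for i in range(n_aggregators)]

def aggregate_writes(chunks_by_rank, aggregators):
    """Route each rank's chunk to an aggregator, then merge per aggregator."""
    n = len(chunks_by_rank)
    per_agg = {a: [] for a in aggregators}
    for rank in range(n):
        # each rank ships its data to the aggregator covering its block
        agg = aggregators[rank * len(aggregators) // n]
        per_agg[agg].append(chunks_by_rank[rank])
    # each aggregator concatenates its chunks into one contiguous buffer
    return {a: b"".join(bufs) for a, bufs in per_agg.items()}

chunks = [bytes([r]) * 4 for r in range(8)]   # 8 ranks, 4 bytes each
aggs = elect_aggregators(8, 2)                # ranks 0 and 4 aggregate
merged = aggregate_writes(chunks, aggs)       # two 16-byte buffers
```

In a real MPI-IO setting the same pattern underlies two-phase collective I/O, where the merge step is followed by one large file write per aggregator.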
Energy consumption represents a large percentage of the operational expenses in data centers. Most existing solutions for energy-aware scheduling focus on job distribution and consolidation between computing servers, while network characteristics are not considered. In this paper, we propose a model of power- and network-aware scheduling that can be tuned to achieve energy savings, through...
We propose a novel inter-layer path control mechanism in PCE-VNTM cooperative multi-layer networks (MLN). For this, we define a novel FA-LSP-aware PCE which reports and manages only forwarding adjacency label switched path (FA-LSP) state. In our scheme, the FA-LSP can be distinguished from pure higher-layer TE links and more accurately controlled with a network policy. We worked out the methodology...
The problem of ensuring virtual network (VN) connectivity in the presence of multiple link failures in the substrate network (SN) is not well investigated in the Network Virtualization (NV) literature. We name this problem Connectivity-aware Virtual Network Embedding (CoViNE). Solving CoViNE will enable a VN operator to perform failure recovery without depending on the SN provider, similar to the IP restoration...
This paper is a broad introduction to the resource allocation problems in cloud systems including Inter-Clouds and Mobile Clouds as well as proposed solutions to these problems. Allocation of computing and network resources to cloud tasks requires innovative approaches in each case of cloud data centers, Inter-Clouds and geographically distributed clouds in order to optimize various performance criteria,...
In today's production-grade cloud datacenters, cloud service providers do not offer any bandwidth guarantees between VMs, which results in unpredictable performance of tenants' applications. To address this issue, we present SpongeNet, a solution that provides bandwidth guarantees for tenants with a novel network abstraction model and a two-phase VM placement algorithm. Prior solutions have significant...
In this paper, we consider a disruption tolerant network (DTN), which enables data transmission with intermittent connectivity and in which an instantaneous end-to-end path between a source and destination may not exist. We explore routing problems in such networks, taking into account the limited storage at each intermediate node. A graph model called the storage enhanced time-varying graph (STVG) is presented,...
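The core idea in the abstract above, a time-varying graph where a message may wait at a node only if that node can store it, can be sketched in a few lines. This is a hedged illustration of the general concept, not the paper's actual STVG formulation; the function `earliest_delivery` and its contact-slot model are assumptions for the example:

```python
# Sketch of routing over a storage-constrained time-varying graph:
# contacts are edges available only at given time slots, and a message
# may persist at a node between slots only if that node has storage.

def earliest_delivery(contacts, storage, src, dst, horizon):
    """contacts: {t: [(u, v), ...]} bidirectional edges active at slot t.
    storage[v] = True if v can buffer the message between slots.
    Returns the earliest slot at which the message reaches dst, or None."""
    at = {src}                          # nodes currently holding the message
    for t in range(horizon):
        nxt = set()
        for u, v in contacts.get(t, []):
            if u in at:
                nxt.add(v)
            if v in at:
                nxt.add(u)
        if dst in nxt:
            return t
        # the message survives to the next slot only at storage-capable nodes
        nxt |= {v for v in at if storage.get(v, False)}
        at = nxt
        if not at:
            return None                 # message dropped everywhere
    return None

contacts = {0: [("A", "B")], 2: [("B", "C")]}
storage = {"B": True}                   # B can buffer across slot 1
print(earliest_delivery(contacts, storage, "A", "C", 5))   # 2
```

Without storage at B, the message is dropped in the gap between the two contacts and delivery fails, which is exactly the constraint that storage-aware DTN routing must reason about.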
Path protection is essential in a carrier's backbone transport network that requires high reliability. Furthermore, it is preferable for a carrier to repair failed facilities as quickly as possible because path protection is not effective against multiple failures, which impair both primary and secondary paths at the same time. To reduce the operating expenditure involved in quick repair, a dynamic...
Software-Defined Networking not only addresses the shortcomings of traditional network technologies in dealing with frequent and immediate changes in cloud data centers but also makes network resource management open and innovation-friendly. To further accelerate the pace of innovation, accessible and easy-to-learn testbeds are required that estimate and measure the performance of network and host capacity...
Different computing paradigms are used to study scientific applications. Simulation tools based on Grid infrastructure play their role in the study of Grid-based computation. Here we introduce the Grid computing paradigm for resource coordination across globally distributed computation. Resource management and application scheduling in such large-scale distributed systems is a complex undertaking in the case of Grid...
It is crucial to guarantee content integrity in Named Data Networking (NDN), where copies of the contents are distributed over the network. NDN adopts digital signatures, and contents are verified whenever they are stored in caches. However, the current scheme is impractical since its operations incur too much overhead. In this paper, we suggest a simple but effective solution for content...
After a successful first run at the LHC, and during the Long Shutdown (LS1) of the accelerator, the workload and data management sectors of the CMS Computing Model are entering an operational review phase in order to concretely assess areas of possible improvement and paths to exploit new promising technology trends. In particular, since the preparation activities for the LHC start, the networks...
Simulation is an important method to evaluate future computer systems. However, the increasing complexity of the target systems has made the development of simulators very difficult. Furthermore, detailed simulation of large-scale parallel architectures is so slow that full evaluation of real applications becomes a great challenge. This paper presents SimICT, a fast and flexible simulation framework...
Analytical modeling plays a unique and important role in computer architecture design, providing a first-order estimate, reducing the design search space size for simulation, or giving insights on basic relationships between various variables and parameters. However, current practices in using analytical models face considerable hurdles: models vary widely in types, predictive capability, and assumptions...
There has been a vast amount of work to develop programming models that provide good performance across machine architectures, are easy to use, and have predictable performance. Similarly, the design and optimization of architectures to achieve optimal performance for an application class remains a challenging task. Accurate cost modeling is essential for both application development and system design...
The emergence of peer-to-peer (P2P) networks has provided a new dimension to overlay networks. People can help each other by sharing content that others may not be able to purchase. Besides these generous peers, there are some other peers who always try to harm the network by providing harmful data, or who simply block activities with invalid responses and by dropping lookup queries to...
Given the high demand for peer-to-peer (P2P) applications, it becomes essential to develop an architecture for operating this type of network that uses bandwidth efficiently and minimizes the overhead of maintaining its topology. Among the existing architectures for P2P networks, Chord is notable for its powerful lookup system, but on the other hand it has a high overhead...