As faster storage devices become commercially viable alternatives to disk drives, the network is increasingly becoming the bottleneck in achieving good performance in distributed storage systems. This is especially true for erasure coded storage, where the reconstruction of lost data can significantly encumber the system. Thus, a significant amount of research has focused on reducing the amount of...
Efficient network resource sharing in cloud computing environments is a challenging problem, and network congestion has been reported as a main bottleneck in multi-tenant data centers. In this paper, the problem of virtual machine allocation is tackled using a Software-Defined Networking (SDN) resource allocation strategy. Moreover, several important parameters are taken into consideration...
A Smart Grid system involves many applications, such as power grid state monitoring and control, demand response, distribution automation, distributed generation and microgrids. They will generate a large volume of data traffic over the grid with different quality of service (QoS) requirements, which present challenges to the existing network architecture and protocols. In this paper, we propose to...
The control of excessive downloads by rogue users in organizational LANs is the subject of this work. Two mechanisms are used to accomplish this. The first, TCP rate control (TCR), is a receiver-based flow control technique that can effectively rate-limit rogue users' flows, making more bandwidth available to regular users. The second mechanism, admission...
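The receiver-based idea behind TCR can be sketched as below: the receiver caps the advertised window so the sender's throughput cannot exceed roughly window / RTT. This is an illustrative sketch, not the paper's implementation; the function name and MSS default are assumptions.

```python
def advertised_window(target_rate_bps: float, rtt_s: float, mss: int = 1460) -> int:
    """Receiver-side rate control: cap the advertised receive window so the
    sender's throughput is bounded by ~window / RTT (sketch, not TCR itself)."""
    window_bytes = target_rate_bps / 8 * rtt_s
    # Round down to whole segments, but never advertise less than one MSS.
    return max(mss, int(window_bytes // mss) * mss)

# Limiting a rogue flow to 1 Mbit/s on a 100 ms RTT path:
# 1e6/8 * 0.1 = 12500 bytes, rounded down to 8 segments = 11680 bytes.
```

Regular flows keep their normal window, so only the flagged flows are throttled.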
HPC is considered increasingly important, but only a small set of large enterprises and governments have the capability to use this high-performance approach. To deliver HPC as a service and solve the software dependency problems that rigidly restrict the usage of HPC applications, this paper provides, based on a Fat-Tree network topology and a virtual HPC cluster model, a cloud HPC delivery...
Flexible-spectrum ROADMs enable networks to support channels operating at heterogeneous line rates by allocating spectral resources dynamically and flexibly, leading to better spectral efficiency. In a realistic network, the traffic scenario is highly dynamic, with continuous tear-down of existing demands and set-up of new ones. This dynamic tear-down and set-up process leads...
Nowadays data centers are attracting huge attention from researchers. The performance of data centers is key to the success of cloud computing. As the dimensions of cloud computing are expanded by offering multiple services, the responsibility of the service provider escalates many-fold. Even hosting a single service per data center requires dependable operation round the clock. Incompetency in fulfilling...
Understanding the characteristics and requirements of applications that run on commodity clusters is key to properly configuring current machines and, more importantly, procuring future systems effectively. There are only a few studies, however, that are current and characterize realistic workloads. For HPC practitioners and researchers, this limits our ability to design solutions that will have an...
Reading and writing data efficiently from storage systems is critical for high performance data-centric applications. These I/O systems are being increasingly characterized by complex topologies and deeper memory hierarchies. Effective parallel I/O solutions are needed to scale applications on current and future supercomputers. Data aggregation is an efficient approach consisting of electing some...
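The aggregation pattern mentioned above, electing a subset of processes to collect their neighbors' buffers before issuing fewer, larger writes, can be sketched as follows. The grouping heuristic and function names are illustrative assumptions; real I/O libraries also weigh network topology and memory hierarchy when electing aggregators.

```python
def elect_aggregators(ranks, group_size):
    """Pick one aggregator per contiguous group of ranks (a simple
    heuristic; not any specific library's election policy)."""
    return [ranks[i] for i in range(0, len(ranks), group_size)]

def aggregate_writes(data_by_rank, group_size):
    """Collection phase of aggregation: each aggregator concatenates its
    group's buffers into one contiguous chunk for a single large write."""
    ranks = sorted(data_by_rank)
    out = {}
    for i in range(0, len(ranks), group_size):
        group = ranks[i:i + group_size]
        out[group[0]] = b"".join(data_by_rank[r] for r in group)
    return out

# Four ranks, two aggregators: ranks 0 and 2 each write one merged chunk.
# aggregate_writes({0: b"a", 1: b"b", 2: b"c", 3: b"d"}, 2)
```

Fewer, larger writes amortize per-request latency on deep storage hierarchies, which is the point of electing aggregators at all.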
We survey network topologies, in particular networks with full all-to-all bandwidth scaling. For more detailed study, we select several recently introduced, promising networks that are cheaper than a 3-level Fat-tree. Through a combination of analysis and simulation on selected supercomputer workloads, we compare these networks according to desirable network properties such as robust performance,...
In Peer-to-Peer (P2P) video streaming systems, a stream of chunks is delivered from the source to all participant peers by utilizing the upload bandwidth of the participants. The given stream is generally divided into several sub-streams, which are delivered through different spanning trees. In this paper, we consider the delivery of sub-streams to n subscribers through spanning trees...
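The stream division described above is commonly done round-robin: sub-stream i carries chunks i, i+k, i+2k, ..., and each sub-stream is pushed down its own spanning tree. A minimal sketch, assuming round-robin splitting (the paper's exact division scheme is not stated in the excerpt):

```python
def split_substreams(chunks, k):
    """Round-robin division of a chunk stream into k sub-streams.
    Sub-stream i gets chunks i, i+k, i+2k, ... (illustrative sketch)."""
    return [chunks[i::k] for i in range(k)]

# Six chunks into three sub-streams: [[0, 3], [1, 4], [2, 5]]
```

A peer subscribed to all k trees interleaves the sub-streams back into the original chunk order; losing one tree degrades quality gracefully rather than stalling playback.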
Resource isolation of the computation and storage in the cloud is relatively mature, but the network resource is still shared among tenants leading to variable and unpredictable network performance when bandwidth guarantees are not enforced. Currently most of the bandwidth guarantee approaches are based on the idea of single-path reservation without fully exploiting the multipath resource, which leads...
There are many high-speed TCP variants with different congestion control algorithms, which are designed for specific settings or use cases. Distinct features of these algorithms are meant to optimize different aspects of network performance, and the choice of TCP variant strongly influences application performance. However, setting up tests to help with the decision of which variant to use can be...
Multipath forwarding consists of using multiple paths simultaneously to transport data over the network. While most such techniques require endpoint modifications, we investigate how multipath forwarding can be done inside the network, transparently to endpoint hosts. With such a network-centric approach, packet reordering becomes a critical issue, as it may cause severe performance degradation....
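The reordering problem can be illustrated with a simple resequencing buffer: packets arriving out of order over different paths are held until the next expected sequence number arrives. This is a generic sketch of resequencing, not the scheme proposed in the paper:

```python
def resequence(packets):
    """Buffer out-of-order (seq, payload) arrivals and release payloads
    in sequence order (generic resequencing sketch)."""
    buffered, expected, released = {}, 0, []
    for seq, payload in packets:
        buffered[seq] = payload
        # Drain every consecutive packet that is now deliverable.
        while expected in buffered:
            released.append(buffered.pop(expected))
            expected += 1
    return released

# Arrivals over two paths: [(1, "b"), (0, "a"), (3, "d"), (2, "c")]
# are released in order as ["a", "b", "c", "d"].
```

The cost of such a buffer, in memory and added latency, is exactly why transparent in-network multipath must keep reordering rare in the first place.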
Data centers offer computational resources with various levels of guaranteed performance to the tenants, through differentiated Service Level Agreements (SLA). Typically, data center and cloud providers do not extend these guarantees to the networking layer. Since communication is carried over a network shared by all the tenants, the performance that a tenant application can achieve is unpredictable...
Three-dimensional (3D) integration is considered as a solution to overcome capacity, bandwidth, and performance limitations of memories. However, due to thermal challenges and cost issues, industry embraced 2.5D implementation for integrating die-stacked memories with large-scale designs, which is enabled by silicon interposer technology that integrates processors and multiple modules of 3D-stacked...
SpaceWire is valuable because it facilitates the development of spacecraft subsystems such as payload instruments, mass memory, and onboard computers. On the other hand, it takes much time and effort for developers to configure an initiator of the SpaceWire network because they have to take account of the entire SpaceWire network in a spacecraft. As the target network becomes larger, the path addressing...
Network function virtualization (NFV) has drawn much attention in recent years, where some network functions that used to be deployed on specific hardware have become virtualized instances on general servers to achieve more scalability and flexibility. In a data center, service function chaining (SFC) makes a workflow traverse different network functions in a specific order to provide different levels...
This paper quantifies the difference in resource demand between modern and classic NoC workloads. In the paper, we show that modern workloads are able to better utilize higher numbers of VCs and smaller C factors in order to attain performance and energy efficiency. This is because of the high throughput and possible local congestions in their traffic pattern. As a result, such workloads are more...
To achieve high throughput, core count in compute accelerators such as General-Purpose Graphics Processing Units (GPGPUs) increases continuously. The communication demand of these cores boosts the demand for a low-latency packet switched network. As packet latency is mainly composed of per-hop latency, contention latency and serialization latency, a favorable Network-on-Chip (NoC) design should efficiently...
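The latency decomposition named above can be written out directly: total packet latency is per-hop latency accumulated over the hop count, plus contention delay, plus serialization of the packet's flits onto the link. A minimal sketch under a zero-load-style model; the cycle counts in the example are illustrative, not measured:

```python
def packet_latency(hops, per_hop_cycles, contention_cycles, flits):
    """NoC packet latency = hop count * per-hop (router + link) latency
    + contention delay + serialization of the remaining flits
    (the head flit's traversal is already counted in the per-hop term)."""
    serialization = flits - 1  # one cycle per flit after the head flit
    return hops * per_hop_cycles + contention_cycles + serialization

# 4 hops at 3 cycles each, no contention, 5-flit packet: 12 + 0 + 4 = 16 cycles.
```

The decomposition makes the design trade-offs explicit: shorter diameters attack the per-hop term, wider links attack serialization, and better routing or arbitration attacks contention.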