Similar to virtualization, Linux Containers (LXC) provide high-performance, lightweight computing resource allocation and isolation. Each LXC container incurs a smaller resource overhead than a virtual machine, leading to significantly lower container migration time and making frequent container placement changes a viable optimization technique. Traditional container scheduling mechanisms...
Compared to traditional head-mounted displays (HMDs), which are mostly intended for industrial use, the more compact and affordable mass-market HMDs have had an emerging impact on the gaming industry. Such HMDs, represented by the Oculus Rift, HTC Vive, and PlayStation VR, have proved to be a more accessible way for players to experience VR gaming. However, the popularity...
On a cluster system running behind a cloud computing service, most applications spawn multiple processes that are then executed on multiple computing nodes. These processes communicate with each other during execution, and the communication performance among them plays an important role in an application's total execution performance. The SDN-enhanced JMS, which we have developed...
Users' computation requests to high-performance computing (HPC) environments have been increasing and diversifying, driven by the need for large-scale simulations and analyses in various scientific fields. To handle such computation requests efficiently and flexibly, resource allocation of virtualized computational resources on an HPC cluster system, such as a cloud computing service, is...
High-performance computing is required for Big Science applications because the proliferation of huge amounts of scientific data that must be analyzed is a serious problem. Traditionally, network resources have been treated as static resources that users cannot control on demand. By integrating network programmability into every stage of a scientific workflow, this study explores a next-generation...
In the era of cloud computing, data centers that accommodate a series of user-requested jobs with diverse resource usage patterns need the capability to efficiently distribute resources to each user job based on its individual usage pattern. In particular, for high-performance computing as a cloud service, which allows many users to benefit from a large-scale computing system,...
Nowadays, supercomputers play an essential role in high-performance computing. In general, modern supercomputers are built as cluster systems, that is, systems of multiple computers interconnected over a network. When coding a parallel program on such a cluster system, MPI (Message Passing Interface) is used. In this paper, we aim to reduce the execution time of MPI Allreduce, a frequently used...
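For context, MPI Allreduce combines a buffer contributed by every process (rank) with a reduction operation such as a sum, and delivers the identical reduced result back to all ranks. A minimal sketch of these semantics in plain Python, with ranks simulated as lists rather than an actual MPI runtime (the function name `allreduce_sum` is illustrative, not part of any MPI API), could look like:

```python
# Sketch of MPI_Allreduce semantics (with a sum reduction) without an
# MPI runtime. Each "rank" holds a send buffer of equal length; after
# the allreduce, every rank holds the elementwise sum of all buffers.

def allreduce_sum(send_buffers):
    """Simulate Allreduce with a sum over equally sized per-rank buffers."""
    n = len(send_buffers[0])
    reduced = [sum(buf[i] for buf in send_buffers) for i in range(n)]
    # Every rank receives an identical copy of the reduced result.
    return [list(reduced) for _ in send_buffers]

# Example: 3 ranks, each contributing a 4-element buffer.
ranks = [[1, 2, 3, 4], [10, 20, 30, 40], [100, 200, 300, 400]]
results = allreduce_sum(ranks)
# Each rank ends up with [111, 222, 333, 444].
```

In a real MPI program this collective is a single call (`MPI_Allreduce` in C, or `comm.allreduce` in mpi4py); the sketch only shows the result every rank must agree on, which is what optimized implementations (ring, recursive doubling, etc.) compute with far less communication than this naive gather.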
Network performance in high-performance computing environments such as supercomputers and Grid systems plays a crucial role in determining the overall performance of computation. However, most Job Management Systems (JMSs) available today, which are responsible for managing multiple computing resources to distribute and balance a computational workload, do not consider network awareness...
The ability to efficiently utilize the interconnect in a cluster system is a key factor in determining computational performance, especially for the class of jobs that require intensive communication between processes. However, most Job Management Systems (JMSs), which are deployed on cluster systems for load balancing, have no mechanism that takes the pattern and requirements of...
Recently, the concept of Software-Defined Networking (SDN), which allows us to administer and configure a network in a centralized, software-programmable manner, has rapidly gathered the attention of network engineers and researchers. In particular, the expectation for OpenFlow, an implementation of SDN, is remarkable. As a result, research activities, which include prototyping, implementation,...